Author Topic: Is the common interpretation of Heisenberg's uncertainty principle wrong?  (Read 37142 times)


scallop

The link is a provocation, not your offer.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Father Jape

Here, in one place, are several of the responses and discussions provoked by what Meho posted:

http://languagelog.ldc.upenn.edu/nll/?p=7584
A pale man on the trail of a pervert.
That's that irrepressible horniness of youth.
A dušman in the absence of Dušman.

https://lingvistickebeleske.wordpress.com

PTY


...
It's just that I'm probably more inclined to think it also matters that the nonsense gets heard, because then you know what people think and how to formulate your own arguments on some other occasion, etc.


That's where (and why) we part ways: I not only agree with the principle, I also apply it, as far as my modest powers allow. But the focus here is on preserving the discourse, and that focus imposes the primacy of lateral thinking. There's no problem talking to a person who simply holds a different opinion, one grounded in the specifics of their perspective and interests; if anything, that is exactly the kind of challenge worth communicating over, as the above clearly shows.  :wink:  But a lateral minimum is necessary, both in how informed and how mature the interlocutor is. In that case we can no longer speak of "nonsense", but of a necessary diversity of views, which is precisely the basis of a coherent exchange, and thus a precondition of any substantive conversation. "Nonsense", on the other hand, enjoys no such lateral privilege, and therefore cannot (must not, I would add, purely for precision's sake) be treated as a credible retort, at least from the perspective of preserving the discourse.


Or, to put it simply: you can wrangle with a fool until noon on judgment day, but he won't come out of that conversation any wiser, while you'll come out either dumber or more tired. I don't see a third outcome there, unless we count the entertainment value as such.


But of course, that is only my personal opinion, which I'm not imposing on you (at least not here and now; for the rest of my remarks I admit they often have a malicious streak, but that is precisely the result of the assessments laid out above, so there's no helping me, let alone you); I'm merely explaining it in more detail, for the sake of fuller understanding.

Meho Krljic

I understand and respect that, but I'll come back to the point that we're talking about publications that popularize science, so I think it's useful for them to know when part of their readership is talking nonsense, because the popularization can then be designed to preemptively answer that nonsense, etc.

PTY

What fools need popularized for them isn't science, it's condoms.

Meho Krljic

:lol: :lol: :lol: :lol:  You're describing my day job.

PTY

So why then do you unravel, here on the forum, the very good deeds that have already booked you a place in heaven?  :mrgreen:

Meho Krljic

I can only quote the late Davor Bobić: heaven is nice, but the crew's in hell.

PTY

:lol: :lol: :lol:  oh, fine then, now my conscience bothers me considerably less that you so often take one for the team  :evil:

angel011

What fools need popularized for them isn't science, it's condoms.


 xrofl


Or sterilization?  :lol:
We're all mad here.

mac

We sterilize cats out of necessity too, even though we love them. If a living being cannot regulate its own behaviour in line with its environment and its capabilities, then the environment has to defend itself.


Meho Krljic

Fair enough, but from recommending contraception to forced sterilization is still a pretty big step; let's not equate the two.

Meho Krljic

Here are further wonders on the subject of "weak measurement", that is, observing a quantum system in a way that doesn't disturb it:
 Physicists snatch a peep into quantum paradox
Quote
Measurement-induced collapse of quantum wavefunction captured in slow motion.
 
 
It is the most fundamental, and yet also the strangest postulate of the theory of quantum mechanics: the idea that a quantum system will catastrophically collapse from a blend of several possible quantum states to just one the moment it is measured by an experimentalist.
 
In textbooks on quantum mechanics, the collapse is depicted as sudden and irreversible. It is also extremely counterintuitive. Researchers have struggled to understand how a measurement can profoundly alter the state that an object is in, rather than just allowing us to learn about an objective reality.
                                                            
A new experiment sheds some light on this question through the use of weak measurements — indirect probes of quantum systems that tweak a wavefunction slightly while providing partial information about its state, avoiding a sudden collapse.
                                                            
Atomic and solid-state physicist Kater Murch of the University of California, Berkeley, and his colleagues performed a series of weak measurements on a superconducting circuit that was in a superposition — a combination of two quantum states. They did this by monitoring microwaves that had passed through a box containing the circuit, based on the fact that the circuit's electrical oscillations alter the state of the microwaves as they pass through the box. Over a couple of microseconds, those weak measurements captured snapshots of the state of the circuit as it gradually changed from a superposition to just one of the states within that superposition — as if charting the collapse of a quantum wavefunction in slow motion.
                                                            
Although equivalent experiments have been done on the quantum states of photons of light, this is the first time such work has been done in a typically noisier solid-state system. “It demonstrates how much progress we’ve made in the solid state in the past 10 years,” says Murch. “Finally, systems are so pure that we can rival experiments in photons.”
Slow-motion movie
The team also found that decoherence, the process by which noise in the environment causes quantum states to decay, can be minimized by repeated weak measurements. Murch says that the microwaves used to probe the superconducting circuit can be thought of as its environment because they are the predominant thing interacting with it. By monitoring the environment, the fluctuations in the microwaves become a known quantity rather than a source of unknown noise.
 
That enables the quantum state to remain pure, as Murch and his colleagues demonstrated — a finding that has a practical consequence. Quantum bits used for computation can be encoded in the state of a superconducting circuit, as they were in the present experiment, but they can also be made from the quantum state of a trapped ion or of an impurity in a crystal. Being able to sustain the coherence of a quantum bit in a solid-state system by making weak measurements ought to be possible in other experimental hardware too. “It’s a very general idea,” says quantum theorist Andrew Jordan at the University of Rochester, New York.
Theorist Alexander Korotkov of the University of California, Riverside, adds that the measurements can be thought of as a kind of 'quantum steering' that helps to keep the system evolving along a quantum path, casting light on the intrinsically gradual nature of any measurement process. "In real life nothing happens instantaneously,” he says.
   Nature doi:10.1038/nature.2013.13899 
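As a rough intuition pump for the "collapse in slow motion" described above, here is a minimal toy sketch, not the Berkeley group's actual analysis: a qubit is subjected to repeated weak measurements, each readout carries only a little information about the state, and a Bayesian update nudges the state assignment a little at a time, so the superposition decays gradually instead of in one jump. All function and parameter names below are made up for illustration.

import random

def weak_measure_trajectory(p_up=0.5, strength=0.05, steps=2000, seed=1):
    """Toy model of gradual collapse under repeated weak measurements.

    p_up is the probability assigned to the 'up' state; each weak readout
    is only slightly biased by the state (set by `strength`), and Bayes'
    rule updates p_up after every readout.
    """
    random.seed(seed)
    history = [p_up]
    for _ in range(steps):
        # Chance of reading '+' given the current state assignment.
        p_plus = p_up * (0.5 + strength) + (1 - p_up) * (0.5 - strength)
        plus = random.random() < p_plus
        # Bayesian update of p_up given the weak outcome.
        like_up = 0.5 + strength if plus else 0.5 - strength
        like_down = 0.5 - strength if plus else 0.5 + strength
        p_up = like_up * p_up / (like_up * p_up + like_down * (1 - p_up))
        history.append(p_up)
    return history

traj = weak_measure_trajectory()
print("start P(up):", traj[0], "-> end P(up):", round(traj[-1], 3))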

PTY

... and here's the full stop, I just don't know whether it's been put at the end of the sentence or used to dot the "i"...  :lol:

http://www.laboratoryequipment.com/news/2013/10/physicists-prove-heisenberg-right

Physicists Prove Heisenberg Right

Fri, 10/18/2013 - 7:00am

An international team of scientists has provided proof of a key feature of quantum physics – Heisenberg's error-disturbance relation — more than 80 years after it was first suggested.

One of the basic concepts in the world of quantum mechanics is that it is impossible to observe physical objects without affecting them in a significant way; there can be no measurement without disturbance.
In a paper in 1927, Werner Heisenberg, one of the architects of the fundamental theories of modern physics, claimed that this fact could be expressed as an uncertainty relation, describing a reciprocal relation between the accuracy in position and the disturbance in momentum. However, he did not supply any evidence for the theory which was largely based on intuition.


Now Prof. Paul Busch of the Univ. of York, Prof. Pekka Lahti of the Univ. of Turku, Finland and Prof. Reinhard Werner of Leibniz Universität Hannover, Germany have finally provided a precise formulation and proof of the error-disturbance relation in an article published today in the journal Physical Review Letters.


Their work has important implications for the developing field of quantum cryptography and computing, as it reaffirms that quantum-encrypted messages can be transmitted securely since an eavesdropper would necessarily disturb the system carrying the message and this could be detected.


Busch, from York's Department of Mathematics, says, "While the slogan 'no measurement without disturbance' has established itself under the name Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results.


"We have shown that despite recent claims to the contrary, Heisenberg-type inequalities can be proven that describe a trade-off between the precision of a position measurement and the necessary resulting disturbance of momentum and vice-versa."


The research involved the scientists considering how simultaneous measurements of a particle's position and momentum are calibrated. They defined the errors in these measurements as the spreads in the distributions of the outcomes in situations where either the position or the momentum of the particle is well defined. They found that these errors for combined position and momentum measurements obey Heisenberg's principle.


Werner says, "Since I was a student I have been wondering what could be meant by an 'uncontrollable' disturbance of momentum in Heisenberg's Gedanken experiment. In our theorem this is now clear: not only does the momentum change, there is also no way to retrieve it from the post measurement state."


Lahti adds, "It is impressive to witness how the intuitions of the great masters from the very early stage of the development of the then brand new theory turn out to be true."
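For reference, the relations at stake are usually written in the following form; this is a standard presentation rather than a formula quoted from the Busch, Lahti and Werner paper, with sigma_x, sigma_p the preparation spreads, epsilon(Q) the error of an approximate position measurement and eta(P) the momentum disturbance it causes:

\[
\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2} \qquad \text{(preparation uncertainty)}
\]
\[
\varepsilon(Q)\,\eta(P) \;\ge\; \frac{\hbar}{2} \qquad \text{(error-disturbance trade-off)}
\]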

PTY

Anyway, since we've already been posting current controversies around the popularization of science and whatnot in this topic, here's another one that polarizes the lay domain and, by god, the scientific one just as much:

Photo by Erik de Castro/Reuters
Ingo Potrykus is a co-inventor of golden rice, which is genetically engineered to combat blindness and death in children by supplying 60 percent of the vitamin A they need in a typical daily helping of rice. His project has been opposed from the outset by environmental groups.

Andy Coghlan: Why did you develop golden rice?

Ingo Potrykus: I got involved because I'm concerned about food security. I realized it's not just about calories, but also about the quality of food. I started working on it in the early 1990s with Peter Beyer. We started on the problem of iron deficiency, but that work didn't pan out, so we switched to tackling vitamin A deficiency. By 1999 we had solved the problem. It was a surprise it worked because from the outset it looked totally crazy.

AC: But environmental groups, including Greenpeace, opposed it?

IP: They were against it from the beginning. They said it was fool's gold because children would need to eat several kilograms of it to get their daily requirement. Children only eat around 300 to 400 grams a day. We worked out that Greenpeace wasn't right, and that the rice contained enough to meet children's needs, but we couldn't prove that because we didn't then have data from an actual trial.

"One of the cleverest tricks of the anti-GMO movement is to link GMOs so closely to Monsanto."

AC: That didn't kill off the project, though?

IP: Indeed no. The next big step was in 2005 when a group at biotech company Syngenta replaced one of the genes intended to produce beta carotene. The original gene, which makes an enzyme called phytoene synthase, came from the narcissus flower, and they replaced it with one from maize that is far more efficient. It produced 20 times more beta carotene, the molecule from carrots that combines with a second molecule of itself once inside our bodies to make a molecule of vitamin A. It was a big success. But again, we couldn't prove we had enough to meet children's needs, so the Greenpeace myth about golden rice being useless lived on. They continued to say that the problem was solvable by other means.

AC: Do they have a point? Why couldn't children just be given vitamin A capsules, or other foods that contain it?

IP: The capsules are already being given through programs of the World Health Organization and charities such as Helen Keller International. They've been running the programs for 15 years, but they cost tens of millions of dollars a year. The problem is that besides the expense, you need the infrastructure to distribute the capsules. We're aiming for people who can't be reached this way, poor farmers in remote places. As for the possibility of eating foods that supply vitamin A, such as liver, leafy green vegetables, and eggs, the people we're targeting are too poor to buy them. Some kitchen garden projects provide them, but despite these interventions we still have 6,000 children dying every day. These are not enough. Our aim is to complement, not replace, these programs.

AC: There's a project in Uganda and Mozambique to combat vitamin A deficiency by supplying sweet potatoes conventionally bred to contain extra beta carotene. Over two years it doubled vitamin A intake in women and children compared with those who ate conventional sweet potatoes. Could this be done with rice?

IP: Sweet potatoes naturally contain beta carotene, so you can use traditional breeding to improve the content. Rice contains no beta carotene, so it's impossible to introduce it without genetic engineering. Because the sweet potato project does not involve genetic modification, Greenpeace doesn't complain about it despite the aim being identical to ours. But the experience with sweet potatoes shows that what we're trying to achieve with rice is realistic. As soon as people get the potatoes, it improves their vitamin A status.

AC: So where has the project got to now?

IP: It took a long time, but by conventional breeding we bred our new golden rice with varieties to suit individual tastes in different countries. This is now completed in the Philippines, Indonesia, India, China, Vietnam, and elsewhere in Asia.

AC: Is it always golden, and what does it taste like?

IP: It always has a beautiful yellow color, and it tastes just the same as usual. Because it's an integral part of the data needed to satisfy regulation authorities, professional taste panels have also tested it.


 
the rest is here

scallop

We should focus on this part of the text:


Ingo Potrykus is a co-inventor of golden rice,

Inventors tend to be owners as well, and to exploit what they own. At any cost.

PTY

Well, that goes without saying, and I find it not only normal but also in keeping with the general law of ownership: the inventor is the owner of the patent, or (in some cases) the owner is whoever financed the process.

scallop

I'd rather not go any further into the ethics of GMO products under the Heisenberg heading.


Meho Krljic

The Economist, too, has something to say about a science that has started to trust too much and verify too little:

How science goes wrong


Quote
Scientific research has changed the world. Now it needs to change itself



A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.
But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.


Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
What a load of rubbish
Even when flawed research does not put people’s lives at risk—and much of it is too far from the market to do so—it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.
One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012—more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.
Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.
Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.
The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.
If it’s broke, fix it
All this makes a shaky foundation for an enterprise dedicated to discovering the truth about the world. What might be done to shore it up? One priority should be for all disciplines to follow the example of those that have done most to tighten standards. A start would be getting to grips with statistics, especially in the growing number of fields that sift through untold oodles of data looking for patterns. Geneticists have done this, and turned an early torrent of specious results from genome sequencing into a trickle of truly significant ones.
Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. (It is already meant to happen in clinical trials of drugs, but compliance is patchy.) Where possible, trial data also should be open for other researchers to inspect and test.


The most enlightened journals are already becoming less averse to humdrum papers. Some government funding agencies, including America’s National Institutes of Health, which dish out $30 billion on research each year, are working out how best to encourage replication. And growing numbers of scientists, especially young ones, understand statistics. But these trends need to go much further. Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened—or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.
Science still commands enormous—if sometimes bemused—respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.
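The point about statistical noise masquerading as discovery is easy to make concrete. The sketch below is an illustration with made-up but typical numbers, not figures from the article: if twenty independent teams each test a hypothesis that is in fact false at the conventional 5 percent significance level, the chance that at least one of them obtains a "significant" result by luck alone is 1 - 0.95^20, roughly 64 percent, and it is usually only that positive result which gets written up.

# Illustration only: probability that at least one of n tests of a true-null
# hypothesis comes out "significant" at level alpha purely by chance.
def at_least_one_false_positive(n_tests=20, alpha=0.05):
    return 1 - (1 - alpha) ** n_tests

print(round(at_least_one_false_positive(), 3))  # prints 0.642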

PTY



Quote
And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.
Word.

PTY

and more: http://www.latimes.com/business/la-fi-hiltzik-20131027,0,1228881.column#axzz2iwz1aoLh



Quote

Science has lost its way, at a big cost to humanity



Researchers are rewarded for splashy findings, not for double-checking accuracy. So many scientists looking for cures to diseases have been building on ideas that aren't even true.

In today's world, brimful as it is with opinion and falsehoods masquerading as facts, you'd think the one place you can depend on for verifiable facts is science.

You'd be wrong. Many billions of dollars' worth of wrong.

A few years ago, scientists at the Thousand Oaks biotech firm Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology.

The idea was to make sure that research on which Amgen was spending millions of development dollars still held up. They figured that a few of the studies would fail the test — that the original results couldn't be reproduced because the findings were especially novel or described fresh therapeutic approaches.

But what they found was startling: Of the 53 landmark papers, only six could be proved valid.

"Even knowing the limitations of preclinical research," observed C. Glenn Begley, then Amgen's head of global cancer research, "this was a shocking result."

Unfortunately, it wasn't unique. A group at Bayer HealthCare in Germany similarly found that only 25% of published papers on which it was basing R&D projects could be validated, suggesting that projects in which the firm had sunk huge resources should be abandoned. Whole fields of research, including some in which patients were already participating in clinical trials, are based on science that hasn't been, and possibly can't be, validated.

"The thing that should scare people is that so many of these important published studies turn out to be wrong when they're investigated further," says Michael Eisen, a biologist at UC Berkeley and the Howard Hughes Medical Institute. The Economist recently estimated spending on biomedical R&D in industrialized countries at $59 billion a year. That's how much could be at risk from faulty fundamental research.

Eisen says the more important flaw in the publication model is that the drive to land a paper in a top journal — Nature and Science lead the list — encourages researchers to hype their results, especially in the life sciences. Peer review, in which a paper is checked out by eminent scientists before publication, isn't a safeguard. Eisen says the unpaid reviewers seldom have the time or inclination to examine a study enough to unearth errors or flaws.

"The journals want the papers that make the sexiest claims," he says. "And scientists believe that the way you succeed is having splashy papers in Science or Nature — it's not bad for them if a paper turns out to be wrong, if it's gotten a lot of attention."

Eisen is a pioneer in open-access scientific publishing, which aims to overturn the traditional model in which leading journals pay nothing for papers often based on publicly funded research, then charge enormous subscription fees to universities and researchers to read them.

But concern about what is emerging as a crisis in science extends beyond the open-access movement. It's reached the National Institutes of Health, which last week launched a project to remake its researchers' approach to publication. Its new PubMed Commons system allows qualified scientists to post ongoing comments about published papers. The goal is to wean scientists from the idea that a cursory, one-time peer review is enough to validate a research study, and substitute a process of continuing scrutiny, so that poor research can be identified quickly and good research can be picked out of the crowd and find a wider audience.

PubMed Commons is an effort to counteract the "perverse incentives" in scientific research and publishing, says David J. Lipman, director of NIH's National Center for Biotechnology Information, which is sponsoring the venture.

The Commons is currently in its pilot phase, during which only registered users among the cadre of researchers whose work appears in PubMed — NCBI's clearinghouse for citations from biomedical journals and online sources — can post comments and read them. Once the full system is launched, possibly within weeks, commenters still will have to be members of that select group, but the comments will be public.

Science and Nature both acknowledge that peer review is imperfect. Science's executive editor, Monica Bradford, told me by email that her journal, which is published by the American Assn. for the Advancement of Science, understands that for papers based on large volumes of statistical data — where cherry-picking or flawed interpretation can contribute to erroneous conclusions — "increased vigilance is required." Nature says that it now commissions expert statisticians to examine data in some papers.

But they both defend pre-publication peer review as an essential element in the scientific process — a "reasonable and fair" process, Bradford says.

Yet there's been some push-back by the prestige journals against the idea that they're encouraging flawed work — and that their business model amounts to profiteering. Earlier this month, Science published a piece by journalist John Bohannon about what happened when he sent a spoof paper with flaws that could have been noticed by a high school chemistry student to 304 open-access chemistry journals (those that charge researchers to publish their papers, but make them available for free). It was accepted by more than half of them.

One that didn't bite was PLoS One, an online open-access journal sponsored by the Public Library of Science, which Eisen co-founded. In fact, PLoS One was among the few journals that identified the fake paper's methodological and ethical flaws.

What was curious, however, was that although Bohannon asserted that his sting showed how the open-access movement was part of "an emerging Wild West in academic publishing," it was the traditionalist Science that published the most dubious recent academic paper of all.

This was a 2010 paper by then-NASA biochemist Felisa Wolfe-Simon and colleagues claiming that they had found bacteria growing in Mono Lake that were uniquely able to subsist on arsenic and even used arsenic to build the backbone of their DNA.

The publication in Science was accompanied by a breathless press release and press conference sponsored by NASA, which had an institutional interest in promoting the idea of alternative life forms. But almost immediately it was debunked by other scientists for spectacularly poor methodology and an invalid conclusion. Wolfe-Simon, who didn't respond to a request for comment last week, has defended her interpretation of her results as "viable." She hasn't withdrawn the paper, nor has Science, which has published numerous critiques of the work. Wolfe-Simon is now associated with the prestigious Lawrence Berkeley National Laboratory.

To Eisen, the Wolfe-Simon affair represents the "perfect storm of scientists obsessed with making a big splash and issuing press releases" — the natural outcome of a system in which there's no career gain in trying to replicate and validate previous work, as important as that process is for the advancement of science.

"A paper that actually shows a previous paper is true would never get published in an important journal," he says, "and it would be almost impossible to get that work funded."

However, the real threat to research and development doesn't come from one-time events like the arsenic study, but from the dissemination of findings that look plausible on the surface but don't stand up to scrutiny, as Begley and his Amgen colleagues found.

The demand for sexy results, combined with indifferent follow-up, means that billions of dollars in worldwide resources devoted to finding and developing remedies for the diseases that afflict us all is being thrown down a rathole. NIH and the rest of the scientific community are just now waking up to the realization that science has lost its way, and it may take years to get back on the right path.


Meho Krljic

More on similar themes. Peter Higgs, whom we surely remember for such hits as the Higgs boson, tells the Guardian he doesn't believe any university would employ him today because he isn't (and never was) productive enough by today's standards. He thinks that today, thanks to the more or less market-driven approach to academia, scientists are expected to publish a lot of papers, which would never have left him enough time to devote himself properly to a particular topic. He also says his university would probably have sacked him in 1980 for low productivity had he not been nominated for the Nobel prize...

Peter Higgs: I wouldn't be productive enough for today's academic system



Quote

  Peter Higgs, the British physicist who gave his name to the Higgs boson, believes no university would employ him in today's academic system because he would not be considered "productive" enough.
The emeritus professor at Edinburgh University, who says he has never sent an email, browsed the internet or even made a mobile phone call, published fewer than 10 papers after his groundbreaking work, which identified the mechanism by which subatomic material acquires mass, was published in 1964.
He doubts a similar breakthrough could be achieved in today's academic culture, because of the expectations on academics to collaborate and keep churning out papers. He said: "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964."
Speaking to the Guardian en route to Stockholm to receive the 2013 Nobel prize for science, Higgs, 84, said he would almost certainly have been sacked had he not been nominated for the Nobel in 1980.
Edinburgh University's authorities then took the view, he later learned, that he "might get a Nobel prize – and if he doesn't we can always get rid of him".
Higgs said he became "an embarrassment to the department when they did research assessment exercises". A message would go around the department saying: "Please give a list of your recent publications." Higgs said: "I would send back a statement: 'None.' "
By the time he retired in 1996, he was uncomfortable with the new academic culture. "After I retired it was quite a long time before I went back to my department. I thought I was well out of it. It wasn't my way of doing things any more. Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough."
Higgs revealed that his career had also been jeopardised by his disagreements in the 1960s and 70s with the then principal, Michael Swann, who went on to chair the BBC. Higgs objected to Swann's handling of student protests and to the university's shareholdings in South African companies during the apartheid regime. "[Swann] didn't understand the issues, and denounced the student leaders."
He regrets that the particle he identified in 1964 became known as the "God particle".
He said: "Some people get confused between the science and the theology. They claim that what happened at Cern proves the existence of God."
An atheist since the age of 10, he fears the nickname "reinforces confused thinking in the heads of people who are already thinking in a confused way. If they believe that story about creation in seven days, are they being intelligent?"
He also revealed that he turned down a knighthood in 1999. "I'm rather cynical about the way the honours system is used, frankly. A whole lot of the honours system is used for political purposes by the government in power."
He has not yet decided which way he will vote in the referendum on Scottish independence. "My attitude would depend a little bit on how much progress the lunatic right of the Conservative party makes in trying to get us out of Europe. If the UK were threatening to withdraw from Europe, I would certainly want Scotland to be out of that."
He has never been tempted to buy a television, but was persuaded to watch The Big Bang Theory last year, and said he wasn't impressed.
 

scallop

Ha, here's a man after my own heart. By god, I'm almost like Kepler. As a token of recognition, I dedicate to Higgs my short workshop story "Pisac nepoznatog dela i Higsov bozon" ("The Writer of an Unknown Work and the Higgs Boson").

tomat

Simulations back up theory that Universe is a hologram


A ten-dimensional theory of gravity makes the same predictions as standard quantum physics in fewer dimensions.

Quote
A team of physicists has provided some of the clearest evidence yet that our Universe could be just one big projection.

In 1997, theoretical physicist Juan Maldacena proposed that an audacious model of the Universe in which gravity arises from infinitesimally thin, vibrating strings could be reinterpreted in terms of well-established physics. The mathematically intricate world of strings, which exist in nine dimensions of space plus one of time, would be merely a hologram: the real action would play out in a simpler, flatter cosmos where there is no gravity.

Maldacena's idea thrilled physicists because it offered a way to put the popular but still unproven theory of strings on solid footing — and because it solved apparent inconsistencies between quantum physics and Einstein's theory of gravity. It provided physicists with a mathematical Rosetta stone, a 'duality', that allowed them to translate back and forth between the two languages, and solve problems in one model that seemed intractable in the other and vice versa. But although the validity of Maldacena's ideas has pretty much been taken for granted ever since, a rigorous proof has been elusive.

In two papers posted on the arXiv repository, Yoshifumi Hyakutake of Ibaraki University in Japan and his colleagues now provide, if not an actual proof, at least compelling evidence that Maldacena’s conjecture is true.

In one paper, Hyakutake computes the internal energy of a black hole, the position of its event horizon (the boundary between the black hole and the rest of the Universe), its entropy and other properties based on the predictions of string theory as well as the effects of so-called virtual particles that continuously pop into and out of existence. In the other, he and his collaborators calculate the internal energy of the corresponding lower-dimensional cosmos with no gravity. The two computer calculations match.

“It seems to be a correct computation,” says Maldacena, who is now at the Institute for Advanced Study in Princeton, New Jersey and who did not contribute to the team's work.

Regime change

The findings “are an interesting way to test many ideas in quantum gravity and string theory”, Maldacena adds. The two papers, he notes, are the culmination of a series of articles contributed by the Japanese team over the past few years. “The whole sequence of papers is very nice because it tests the dual [nature of the universes] in regimes where there are no analytic tests.”

“They have numerically confirmed, perhaps for the first time, something we were fairly sure had to be true, but was still a conjecture — namely that the thermodynamics of certain black holes can be reproduced from a lower-dimensional universe,” says Leonard Susskind, a theoretical physicist at Stanford University in California who was among the first theoreticians to explore the idea of holographic universes.

Neither of the model universes explored by the Japanese team resembles our own, Maldacena notes. The cosmos with a black hole has ten dimensions, with eight of them forming an eight-dimensional sphere. The lower-dimensional, gravity-free one has but a single dimension, and its menagerie of quantum particles resembles a group of idealized springs, or harmonic oscillators, attached to one another.

Nevertheless, says Maldacena, the numerical proof that these two seemingly disparate worlds are actually identical gives hope that the gravitational properties of our Universe can one day be explained by a simpler cosmos purely in terms of quantum theory.

Nature doi:10.1038/nature.2013.14328

http://www.nature.com/news/simulations-back-up-theory-that-universe-is-a-hologram-1.14328
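As background to the remark that the thermodynamics of certain black holes can be reproduced from a lower-dimensional universe: at leading order, the black-hole entropy being matched in such calculations is given by the Bekenstein-Hawking area formula, a standard textbook relation rather than anything quoted from the article, where A is the horizon area:

\[
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3 A}{4\, G\, \hbar}
\]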

edit: this copy-pasting somehow got complicated, look at the mess it made. It wasn't like this before.
Arguing on the internet is like running in the Special Olympics: even if you win, you're still retarded.

Meho Krljic

As far as bacteria go, it pains me greatly, but I have to side with kufer...
I messed up my own, probably with antibiotics, and now I have plenty of problems...

There's a good article about it in the latest issue of The Economist...

http://www.economist.com/node/21560523

Great article!

Here's a bit more about the bacteria that (maybe) affect our cognitive processes:


Body bacteria: Can your gut bugs make you smarter?
 
Quote
The bacteria in our guts can influence the working of the mind, says Frank Swain. So could they be upgraded to enhance brainpower?   
 
I have some startling news: you are not human. At least, by some counts. While you are indeed made up of billions of human cells working in remarkable concert, these are easily outnumbered by the bacterial cells that live on and in you – your microbiome. There are ten of them for every one of your own cells, and they add an extra two kilograms (4.4lbs) to your body.
Far from being freeloading passengers, many of these microbes actively help digest food and prevent infection. And now evidence is emerging that these tiny organisms may also have a profound impact on the brain too. They are a living augmentation of your body – and like any enhancement, this means they could, in principle, be upgraded. So, could you hack your microbiome to make yourself healthier, happier, and smarter too?
According to John Cryan, this isn’t as far-fetched as it sounds. As a professor of anatomy and neuroscience at University College Cork, he specialises in the relationship between the brain and the gut. One of his early experiments showed the diversity of bacteria living in the gut was greatly diminished in mice suffering from early life stress. This finding inspired him to investigate the connection between the microbiome and the brain.
The bacterial microbiota in the gut helps normal brain development, says Cryan. “If you don’t have microbiota you have major changes in brain structure and function, and then also in behaviour.” In a pioneering study, a Japanese research team showed that mice raised without any gut bacteria had an exaggerated physical response to stress, releasing more hormone than mice that had a full complement of bacteria. However, this effect could be reduced in bacteria-free mice by repopulating their gut with Bifidobacterium infantis, one of the major symbiotic bacteria found in the gut. Cryan’s team built on this finding, showing that this effect could be reproduced even in healthy mice. “We took healthy mice and fed them Lactobacillus (another common gut bacterium), and we showed that these animals had a reduced stress response and reduced anxiety-related behaviours.”
 
But why should bacteria in the gut affect the brain? There are several different ways that messages can be sent from one organ to the other. It can be hormones or immune cells via the bloodstream, or by impulses along the vagus nerve, which stretches from the brain to intertwine closely with the gut. Through these pathways, actions in one produce effects in the other.
So how might you go about altering your microbiome to do a spot of brain-hacking? Cryan’s team works on several fronts, investigating the potential to manage stress, pain, obesity and cognition through the gut. “We have unpublished data showing that probiotics can enhance learning in animal models,” he tells me. His team tested the effects of two strains of bacteria, finding that one improved cognition in mice. His team is now embarking on human trials, to see if healthy volunteers can have their cognitive abilities enhanced or modulated by tweaking the gut microbiome.
Another method of adjusting the bacterial profile of your gut is to undergo a transplant that involves taking faecal material from a donor’s intestine – often a close relative – and implanting into a recipient via enema infusion. This unorthodox treatment has been shown to successfully treat infections caused by pathogenic bacteria colonising the gut.
Brain boost
Thankfully, Cryan has a far more appetising method on offer.  “Diet is perhaps the biggest factor in shaping the composition of the microbiome,” he says. A study by University College Cork researchers published in Nature in 2012 followed 200 elderly people over the course of two years, as they transitioned into different environments such as nursing homes. The researchers found that their subjects’ health – frailty, cognition, and immune system – all correlated with their microbiome. From bacterial population alone, researchers could tell if a patient was a long-stay patient in a nursing home, or short-stay, or living in the general community. These changes were a direct reflection of their diet in these different environments. “A diverse diet gives you a diverse microbiome that gives you a better health outcome,” says Cryan.
Beyond a healthy and varied diet, though, it still remains to be discovered whether certain food combinations could alter the microbiome to produce a cognitive boost. In fact, Cryan recommends that claims from probiotic supplements of brain-boosting ought to be taken with a pinch of salt for now. “Unless the studies have been done, one can assume they’re not going to have any effect on mental health,” he says. Still, he’s optimistic about the future. “The field right now is evolving very strongly and quickly. There’s a lot of important research to be done. It’s still early days.”
Hacking the brain often conjures up ideas of electrical hardware such as implants and trans-cranial stimulators. But it might be the case that a simple change in diet can shift your brain up a gear. The transhumanists and body hackers who believe that technology is the sole way to improve human ability would do well to pay as much attention to the living augmentation that already resides in their gut.
 

Meho Krljic

And here's more on how research and measurements tend to be more deceptive than we would like:
 
  Lab equipment may take on a mind of its own to trick scientists
 
Quote
MIT researchers propose using distant quasars to test Bell’s theorem
In a paper published this week in the journal Physical Review Letters, MIT researchers propose an experiment that may close the last major loophole of Bell’s inequality — a 50-year-old theorem that, if violated by experiments, would mean that our universe is based not on the textbook laws of classical physics, but on the less-tangible probabilities of quantum mechanics.
 
Such a quantum view would allow for seemingly counterintuitive phenomena such as entanglement, in which the measurement of one particle instantly affects another, even if those entangled particles are at opposite ends of the universe. Among other things, entanglement — a quantum feature Albert Einstein skeptically referred to as “spooky action at a distance”— seems to suggest that entangled particles can affect each other instantly, faster than the speed of light.
 
In 1964, physicist John Bell took on this seeming disparity between classical physics and quantum mechanics, stating that if the universe is based on classical physics, the measurement of one entangled particle should not affect the measurement of the other — a theory, known as locality, in which there is a limit to how correlated two particles can be. Bell devised a mathematical formula for locality, and presented scenarios that violated this formula, instead following predictions of quantum mechanics.
 
Since then, physicists have tested Bell’s theorem by measuring the properties of entangled quantum particles in the laboratory. Essentially all of these experiments have shown that such particles are correlated more strongly than would be expected under the laws of classical physics — findings that support quantum mechanics.
 
However, scientists have also identified several major loopholes in Bell’s theorem. These suggest that while the outcomes of such experiments may appear to support the predictions of quantum mechanics, they may actually reflect unknown “hidden variables” that give the illusion of a quantum outcome, but can still be explained in classical terms.
 
Though two major loopholes have since been closed, a third remains; physicists refer to it as “setting independence,” or more provocatively, “free will.” This loophole proposes that a particle detector’s settings may “conspire” with events in the shared causal past of the detectors themselves to determine which properties of the particle to measure — a scenario that, however far-fetched, implies that a physicist running the experiment does not have complete free will in choosing each detector’s setting. Such a scenario would result in biased measurements, suggesting that two particles are correlated more than they actually are, and giving more weight to quantum mechanics than classical physics.
 
“It sounds creepy, but people realized that’s a logical possibility that hasn’t been closed yet,” says MIT’s David Kaiser, the Germeshausen Professor of the History of Science and senior lecturer in the Department of Physics. “Before we make the leap to say the equations of quantum theory tell us the world is inescapably crazy and bizarre, have we closed every conceivable logical loophole, even if they may not seem plausible in the world we know today?”
 
Now Kaiser, along with MIT postdoc Andrew Friedman and Jason Gallicchio of the University of Chicago, have proposed an experiment to close this third loophole by determining a particle detector’s settings using some of the oldest light in the universe: distant quasars, or galactic nuclei, which formed billions of years ago.
 
The idea, essentially, is that if two quasars on opposite sides of the sky are sufficiently distant from each other, they would have been out of causal contact since the Big Bang some 14 billion years ago, with no possible means of any third party communicating with both of them since the beginning of the universe — an ideal scenario for determining each particle detector’s settings.
 
As Kaiser explains it, an experiment would go something like this: A laboratory setup would consist of a particle generator, such as a radioactive atom that spits out pairs of entangled particles. One detector measures a property of particle A, while another detector does the same for particle B. A split second after the particles are generated, but just before the detectors are set, scientists would use telescopic observations of distant quasars to determine which properties each detector will measure of a respective particle. In other words, quasar A determines the settings to detect particle A, and quasar B sets the detector for particle B.
 
The researchers reason that since each detector’s setting is determined by sources that have had no communication or shared history since the beginning of the universe, it would be virtually impossible for these detectors to “conspire” with anything in their shared past to give a biased measurement; the experimental setup could therefore close the “free will” loophole. If, after multiple measurements with this experimental setup, scientists found that the measurements of the particles were correlated more than predicted by the laws of classical physics, Kaiser says, then the universe as we see it must be based instead on quantum mechanics.
 
“I think it’s fair to say this [loophole] is the final frontier, logically speaking, that stands between this enormously impressive accumulated experimental evidence and the interpretation of that evidence saying the world is governed by quantum mechanics,” Kaiser says.
 
Now that the researchers have put forth an experimental approach, they hope that others will perform actual experiments, using observations of distant quasars.
 
“At first, we didn’t know if our setup would require constellations of futuristic space satellites, or 1,000-meter telescopes on the dark side of the moon,” Friedman says. “So we were naturally delighted when we discovered, much to our surprise, that our experiment was both feasible in the real world with present technology, and interesting enough to our experimentalist collaborators who actually want to make it happen in the next few years.”
   
Adds Kaiser, “We’ve said, ‘Let’s go for broke — let’s use the history of the cosmos since the Big Bang, darn it.’ And it is very exciting that it’s actually feasible.”
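The "mathematical formula for locality" mentioned in the piece is usually stated in the CHSH form of Bell's inequality; this is standard background rather than anything specific to the MIT proposal. With measurement settings a, a' on one particle, b, b' on the other, and E(x,y) the measured correlation, any local hidden-variable model is bounded by 2, while quantum mechanics can reach 2*sqrt(2):

\[
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\qquad |S| \le 2 \ \text{(local realism)},
\qquad |S| \le 2\sqrt{2} \ \text{(quantum mechanics)}
\]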
   

scallop

I haven't read the link, but I do know that lab equipment can be a tricky beast. Especially when it starts producing results the researcher "likes".

Albedo 0

never mind that, give me those intelligence bacteria! 8-)


scallop

Running short of them?

Albedo 0

from now on I'm taking Flonivin every day! 8-)

Karl Rosman

To balance out the bacteria? The shit and the intelligence?

But, why?
"On really romantic evenings of self, I go salsa dancing with my confusion."
"Well, I've wrestled with reality for 35 years, Doctor, and I'm happy to state I finally won over it"

Albedo 0

a bit extra never hurt anyone's arse!

Karl Rosman

a bit extra never hurt anyone's arse!
Uh. I'm not so sure about that, but I'll take your word for it!  :)


"On really romantic evenings of self, I go salsa dancing with my confusion."
"Well, I've wrestled with reality for 35 years, Doctor, and I'm happy to state I finally won over it"

Albedo 0

by the way, since we're already on the subject of doping, is it true that cocaine was used at the ETF during exam season? 8-)


Meho Krljic

The dilemma of why we don't see quantum processes at the macroscopic scale has long racked the brains of decent folk. Well, now some claim to have a proposed solution:
 
 The Astounding Link Between the P≠NP Problem and the Quantum Nature of Universe
 
Quote
With some straightforward logic, one theorist has shown that macroscopic quantum objects cannot exist if P≠NP, which suddenly explains one of the greatest mysteries in physics
 
The paradox of Schrodinger’s cat is a thought experiment dreamed up to explore one of the great mysteries of quantum mechanics—why we don’t see its strange and puzzling behaviour in the macroscopic world.
The paradox is simple to state. It involves a cat, a flask of poison and a source of radiation; all contained within a sealed box. If a monitor in the box detects radioactivity, the flask is shattered, releasing the poison and killing the cat.
 
 
The paradox comes about because the radioactive decay is a quantum process and so in a superposition of states until observed. The radioactive atom is both decayed and undecayed at the same time.
But that means the cat must also be in a superposition of alive and dead states until the box is open and the system is observed. In other words, the cat must be both dead and alive at the same time.
Nobody knows why we don’t observe these kinds of strange superpositions in the macroscopic world. For some reason, quantum mechanics just doesn’t work on that scale. And therein lies the mystery, one of the greatest in science.
But that mystery may now be solved thanks to the extraordinary work of Arkady Bolotin at Ben-Gurion University in Israel. He says the key is to think of Schrodinger’s cat as a problem of computational complexity theory. When he does that, it melts away.
First some background. The equation that describes the behaviour of quantum particles is called Schrodinger’s equation. It is relatively straightforward to solve for simple systems such as a single quantum particle in a box and predicts that these systems exist in a quantum superposition of states.
In principle, it ought to be possible to use Schrödinger’s equation to describe any object regardless of its size, perhaps even the universe itself. This equation predicts that the system being modelled exists in a superposition of states, even though this is never experienced in our macroscopic world.
The problem is that the equation says nothing about how large an object needs to be before it obeys Newtonian mechanics rather than the quantum variety.
Now Bolotin thinks he knows why there is a limit and where it lies. He says there is an implicit assumption when physicists say that Schrödinger’s equation can describe macroscopic systems. This assumption is that the equations can be solved in a reasonable amount of time to produce an answer.
That’s certainly true of simple systems but physicists well know that calculating the quantum properties of more complex systems is hugely difficult. The world’s most powerful supercomputers cough and splutter when asked to handle systems consisting of more than a few thousand quantum particles.
That leads Bolotin to ask a perfectly reasonable question. What if there is no way to solve Schrödinger’s equation for macroscopic systems in a reasonable period of time? “If it were so, then quantum theoretical constructions like “a quantum state of a macroscopic object” or “the wave function of the universe” would be nothing more than nontestable empty abstractions,” he says.
He then goes on to prove that this is exactly the case, with one important proviso: that P ≠ NP. Here’s how he does it.
His first step is to restate Schrödinger’s equation as a problem of computational complexity. For a simple system, the equation can be solved by an ordinary computer in a reasonable time, so it falls into the class of computational problems known as NP.
Bolotin then goes on to show that the problem of solving the Schrödinger equation is at least as hard as any problem in the NP class. This makes it equivalent to many other head-scratchers such as the travelling salesman problem. Computational complexity theorists call these problems NP-hard.
What’s interesting about NP-hard problems is that they are all linked: an efficient solution to any one of them would automatically yield efficient solutions to every problem in NP. The biggest question in computational complexity theory (and perhaps in all of physics, if the computational complexity theorists are to be believed), is whether such a solution exists or not.
 

 The class of problems that can be solved quickly and efficiently is called P. So the statement that NP-hard problems can also be solved quickly and efficiently is the famous claim that P = NP.
But since nobody has found such a solution, the general belief is that they cannot be solved in this way. Or as computational complexity theorists put it: P ≠ NP. Nobody has yet proved this, but most theorists would bet their bottom dollar that it is true.
Schrödinger’s equation has a direct bearing on this. If the equation can be quickly and efficiently solved in all cases, including for vast macroscopic states, then it must be possible to solve all other NP-hard problems in the same way. That is equivalent to saying that P=NP.
But if P is not equal to NP, as most experts believe, then there is a limit to the size the quantum system can be. Indeed, that is exactly what physicists observe.
Bolotin goes on to flesh this out with some numbers. If P ≠ NP and there is no efficient algorithm for solving Schrödinger’s equation, then there is only one way of finding a solution, which is a brute force search.
In the travelling salesman problem of finding the shortest way of visiting a number of cities, the brute force solution involves measuring the length of all permutations of routes and then seeing which is shortest. That’s straightforward for a small number of cities but rapidly becomes difficult for large numbers of them.
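As an aside, here is a minimal Python sketch of that brute-force approach, using made-up coordinates for a handful of hypothetical cities; the point is only how fast the number of permutations explodes.
Code:
import itertools
import math

# Hypothetical city coordinates, purely for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6), "E": (3, 7)}

def route_length(route):
    # Total length of a closed tour visiting the cities in the given order.
    legs = zip(route, route[1:] + route[:1])
    return sum(math.dist(cities[a], cities[b]) for a, b in legs)

# Brute force: try every ordering and keep the shortest.
best = min(itertools.permutations(cities), key=route_length)
print(best, route_length(best))
# 5 cities -> 120 orderings; 20 cities -> about 2.4e18, already hopeless.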
Exactly the same is true of Schrödinger’s equation. It’s straightforward for a small number of quantum particles but for a macroscopic system, it becomes a monster.
Macroscopic systems are made up of a number of constituent particles comparable to Avogadro’s number, which is of the order of 10^24.
So the number of elementary operations needed to exactly solve this equation would be about 2^(10^24). That’s a big number!
To put it in context, Bolotin imagines a computer capable of solving it over a reasonable running time of, say, a year. Such a computer would need to execute each elementary operation on a timescale of the order of 10^(-3x10^23) seconds.
This time scale is so short that it is difficult to imagine. But to put it in context, Bolotin says there would be little difference between running such a computer over one year and, say, one hundred billion years (10^18 seconds), which is several times longer than the age of the universe.
What’s more, this time scale is considerably shorter than the Planck timescale, which is roughly equal to 10^-43 seconds. It’s simply not possible to measure or detect change on a scale shorter than this. So even if there was a device capable of doing this kind of calculating, there would be no way of detecting that it had done anything.
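For the curious, the arithmetic above is easy to reproduce with base-10 logarithms, since 2^(10^24) is far too large to hold in any ordinary numeric type; the only inputs are the one-year running time and the Planck time quoted in the text.
Code:
import math

n_particles = 1e24                        # roughly Avogadro's number
log10_ops = n_particles * math.log10(2)   # log10 of 2^(10^24) elementary operations

year = 3.15e7                             # seconds in a year
log10_time_per_op = math.log10(year) - log10_ops
print(f"time per operation ~ 10^({log10_time_per_op:.3g}) s")   # ~10^(-3e23) s

planck = 1e-43                            # Planck time in seconds
print(f"Planck time ~ 10^({math.log10(planck):.0f}) s, vastly longer by comparison")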
“So, unless the laws of physics (as we understand them today) were wrong, no computer would ever be able to execute [this number of] operations in any reasonable amount of time,” concludes Bolotin.
In other words, macroscopic systems cannot be quantum in nature. Or as Bolotin puts it: “For anyone living in the real physical world (of limited computational resources) the Schrodinger equation will turn out to be simply unsolvable for macroscopic objects.”
That’s a fascinating piece of logic in a remarkably clear and well written paper. It also raises an interesting avenue for experiment. Physicists have become increasingly skilled at creating conditions in which ever larger objects demonstrate quantum behaviour.
The largest quantum object so far—a vibrating silicon springboard—contained around a trillion atoms (10^12), still vastly fewer than Avogadro’s number. But Bolotin’s work suggests a clear size limit.
So in theory, these kinds of experiments provide a way to probe the computational limits of the universe. What’s needed, of course, is a clear prediction from his theory that allows it to be tested experimentally.
There is also a puzzle. There are well known quantum states that do contain Avogadro’s number of particles: these include superfluids, superconductors, lasers and so on. It would be interesting to see Bolotin’s treatment of these from the point of view of computational complexity.
In these situations, all the particles occupy the same ground state, which presumably significantly reduces the complexity. But by how much? Does his approach have anything to say about how big these states can become?
Beyond that, the questions come thick and fast. What of the transition between quantum and classical states—how does that happen in terms of computational complexity? What of the collapse of stars, which are definitely classical objects, into black holes, which may be quantum ones?
And how does the universe decide whether a system is going to be quantum or not? What is the mechanism by which computational complexity exerts its influence over nature? And so on…
The computational complexity theorist Scott Aaronson has long argued that the most interesting problems in physics are intricately linked with his discipline. And Bolotin’s new work shows why. It’s just possible that computational complexity theory could be quantum physics’ next big thing.
Ref: arxiv.org/abs/1403.7686 : Computational Solution to Quantum Foundational Problems


Meho Krljic

  • 5
  • 3
  • Posts: 48.363
I find it very hard to make sense of this, but here are some indications that the speed of light may be slightly lower than the value we have all agreed on for the past hundred years or so. Since the article is full of graphs, I'm not copying it, just the link:


First Evidence Of A Correction To The Speed of Light







Meho Krljic

  • 5
  • 3
  • Posts: 48.363
Successful teleportation of multiple quantum properties of a photon:


https://medium.com/the-physics-arxiv-blog/first-teleportation-of-multiple-quantum-properties-of-a-single-photon-7c1e61598565


This surely means we are ONE STEP away from quantum communication!!!!!!!!!!!  :-| :-| :-| :-| :-| :-| :-| :-|

Mica Milovanovic

  • 8
  • 3
  • *
  • Posts: 8.220
Scotty, where are you now that I need you most for teleportation?
Mica

Meho Krljic

  • 5
  • 3
  • Posts: 48.363
How Could Quantum Physics Get Stranger? Shattered Wave Functions



Quote
A team of physicists based at Brown University has succeeded in shattering a quantum wave function. That near-mythical representation of indeterminate reality, in which an unmeasured particle is able to occupy many states simultaneously, can be dissected into many parts. This dissection, which is described this week in the Journal of Low Temperature Physics, has the potential to turn how we view the quantum world on its head.
 When we say some element of the quantum world occupies many states at once, what’s really being referred to is the element’s wave function. A wave function can be viewed as a space occupied simultaneously by many different possibilities or degrees of freedom.
 If a particle could be in position (x,y,z) in three-dimensional space, there are probabilities that it could specifically be at (x1,y1,z1) or (x2,y2,z2) and so forth, and this is represented in the wave function, which is all of these possibilities added together. Even what we’d normally (deterministically) consider empty space has a wave function and, as such, contains very real possibilities of not being empty. Sometimes this manifests as real “virtual” particles.
 Visually, we might imagine a particle in its undisturbed state looking more like a cloud than a point in space. Imagine tracing out all of your movements for a couple of weeks or months on a map or satellite image. It might look a bit like that cloud, only instead of describing past events, the electron cloud would describe precisely right now. Weird, eh? What makes it even cooler is that a bunch of particles can share these states at the same time, effectively becoming instances of the same particle. And so: entanglement.
 It’s possible to strip away all of this indeterminateness. To do so is actually quite easy; wave functions are very fragile, subject to a “collapse” in which all of those possibilities become just a single particle at a single point at a single time. That’s what happens when a macroscopic human attempts to measure a quantum mechanical system: The wave drops away and all that’s left is a boring, well-defined thing.
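As a toy illustration (not a model of any particular experiment), a discretised one-dimensional wave function can be written down numerically, with 'collapse' treated as drawing a single position from |psi|^2.
Code:
import numpy as np

x = np.linspace(-5, 5, 200)
psi = np.exp(-x**2) * np.exp(1j * 2 * x)      # an arbitrary Gaussian wave packet
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))   # normalise so total probability is 1

prob = np.abs(psi)**2 * (x[1] - x[0])         # probability of each grid cell
print("total probability:", prob.sum())       # ~1.0: the particle is 'everywhere at once'

rng = np.random.default_rng(0)
measured_x = rng.choice(x, p=prob / prob.sum())
print("one 'measurement' finds it at x =", measured_x)   # collapse: a single definite value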


 What the Brown researchers, led by physicist Humphrey Maris, found is that it’s possible to take a wave function and isolate it into different parts. So, if our electron has some probability of being in position (x1,y1,z1) and another probability of being in position (x2,y2,z2), those two probabilities can be isolated from each other, cordoned off like quantum crime scenes. According to Maris and his team, this can be achieved (and has been achieved) using tiny bubbles of helium as physical “traps.”
 “We are trapping the chance of finding the electron, not pieces of the electron,” Maris said in a statement provided by Brown University. “It’s a little like a lottery. When lottery tickets are sold, everyone who buys a ticket gets a piece of paper. So all these people are holding a chance and you can consider that the chances are spread all over the place. But there is only one prize—one electron—and where that prize will go is determined later.”
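Maris's lottery analogy can be sketched the same way: assume some made-up split of the probability across three bubbles, and let each 'measurement' hand the single prize to exactly one of them.
Code:
import random

# Hypothetical split of the single electron's probability across three bubbles.
bubble_probability = {"bubble_1": 0.5, "bubble_2": 0.3, "bubble_3": 0.2}

def measure(probs):
    # One measurement: exactly one bubble wins the prize (the electron).
    bubbles, weights = zip(*probs.items())
    return random.choices(bubbles, weights=weights)[0]

counts = {b: 0 for b in bubble_probability}
for _ in range(10_000):
    counts[measure(bubble_probability)] += 1
print(counts)   # frequencies track the split, but every single trial has one winner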
 Maris is chasing a fairly old mystery with this work. In experiments dating back to the 1960s, physicists have observed a very peculiar behavior of electrons in supercooled baths of helium. When an electron enters the bath, it acts to repel the surrounding helium atoms, forming its own little bubble or cavity in the process. These bubbles then drift slowly downwards toward the bottom of the bath and a waiting detector. The bubbles should all be the same size and, thus, should fall at the same rate. Yet, something much stranger happens.
 The electron bubbles aren’t the first things to hit the detector in this experimental setup. Before they arrive as anticipated, the detector begins registering a series of mystery objects. This has been repeated many times over the years and explanations have included impurities in the helium bath or the possibility of electrons coupling to the helium atoms, which would then register negative charges at the detector.
 The Brown team argues that the only explanation that fits is their solution, a sort of fission reaction that breaks the electron wave function apart into pieces roughly the same size as the mystery objects detected above. So, it’s not new and different objects hitting the detector first, it’s just different aspects or relics of the same electron, slivers of that cloud of possibilities. That an electron (or other particle) can be in many places at the same time is strange enough, but the notion that those possibilities can be captured and shuttled away adds a whole new twist.
 Ultimately, the wave function isn’t a physical thing. It’s mathematics that describe a phenomenon. So, it’s not as if in these extra bubbles we might find some “part” of an electron. The electron, upon measurement, will be in precisely one bubble. Or that would be the case if we assume the helium itself isn’t capable of causing the same sort of disturbance as a human measurement, causing its own wave function collapse with its own sort of “measurement.”
 “No one is sure what actually constitutes a measurement,” Maris said. “Perhaps physicists can agree that someone with a Ph.D. wearing a white coat sitting in the lab of a famous university can make measurements. But what about somebody who really isn’t sure what they are doing? Is consciousness required? We don’t really know.”

Meho Krljic

  • 5
  • 3
  • Posts: 48.363
This may not be the ideal topic for this article, but here it is:



2 Futures Can Explain Time's Mysterious Past





Quote
New theories suggest the big bang was not the beginning, and that we may live in the past of a parallel universe


Physicists have a problem with time.
 
 Whether through Newton’s gravitation, Maxwell’s electrodynamics, Einstein’s special and general relativity or quantum mechanics, all the equations that best describe our universe work perfectly if time flows forward or backward.
 
 Of course the world we experience is entirely different. The universe is expanding, not contracting. Stars emit light rather than absorb it, and radioactive atoms decay rather than reassemble. Omelets don’t transform back to unbroken eggs and cigarettes never coalesce from smoke and ashes. We remember the past, not the future, and we grow old and decrepit, not young and rejuvenated. For us, time has a clear and irreversible direction. It flies forward like a missile, equations be damned.
 
 For more than a century, the standard explanation for “time’s arrow,” as the astrophysicist Arthur Eddington first called it in 1927, has been that it is an emergent property of thermodynamics, as first laid out in the work of the 19th-century Austrian physicist Ludwig Boltzmann. In this view what we perceive as the arrow of time is really just the inexorable rearrangement of highly ordered states into random, useless configurations, a product of the universal tendency for all things to settle toward equilibrium with one another.
 
 Informally speaking, the crux of this idea is that “things fall apart,” but more formally, it is a consequence of the second law of thermodynamics, which Boltzmann helped devise. The law states that in any closed system (like the universe itself), entropy—disorder—can only increase. Increasing entropy is a cosmic certainty because there are always a great many more disordered states than orderly ones for any given system, similar to how there are many more ways to scatter papers across a desk than to stack them neatly in a single pile.
 
 The thermodynamic arrow of time suggests our observable universe began in an exceptionally special state of high order and low entropy, like a pristine cosmic egg materializing at the beginning of time to be broken and scrambled for all eternity. From Boltzmann’s era onward, scientists allergic to the notion of such an immaculate conception have been grappling with this conundrum.
 
 Boltzmann, believing the universe to be eternal in accordance with Newton’s laws, thought that eternity could explain a low-entropy origin for time’s arrow. Given enough time—endless time, in fact—anything that can happen will happen, including the emergence of a large region of very low entropy as a statistical fluctuation from an ageless, high-entropy universe in a state of near-equilibrium. Boltzmann mused that we might live in such an improbable region, with an arrow of time set by the region’s long, slow entropic slide back into equilibrium.
 
 Today’s cosmologists have a tougher task, because the universe as we now know it isn’t ageless and unmoving: They have to explain the emergence of time’s arrow within a dynamic, relativistic universe that apparently began some 14 billion years ago in the fiery conflagration of the big bang. More often than not the explanation involves ‘fine-tuning’—the careful and arbitrary tweaking of a theory’s parameters to accord with observations.
 
 Many of the modern explanations for a low-entropy arrow of time involve a theory called inflation—the idea that a strange burst of antigravity ballooned the primordial universe to an astronomically larger size, smoothing it out into what corresponds to a very low-entropy state from which subsequent cosmic structures could emerge. But explaining inflation itself seems to require even more fine-tuning. One of the problems is that once begun, inflation tends to continue unstoppably. This “eternal inflation” would spawn infinitudes of baby universes about which predictions and observations are, at best, elusive. Whether this is an undesirable bug or a wonderful feature of the theory is a matter of fierce debate; for the time being it seems that inflation’s extreme flexibility and explanatory power are both its greatest strength and its greatest weakness.
 
 For all these reasons, some scientists seeking a low-entropy origin for time’s arrow find explanations relying on inflation slightly unsatisfying. “There are many researchers now trying to show in some natural way why it’s reasonable to expect the initial entropy of the universe to be very low,” says David Albert, a philosopher and physicist at Columbia University. “There are even some who think that the entropy being low at the beginning of the universe should just be added as a new law of physics.”
 
 That latter idea is tantamount to despairing cosmologists simply throwing in the towel. Fortunately, there may be another way.
 
 Tentative new work from Julian Barbour of the University of Oxford, Tim Koslowski of the University of New Brunswick and Flavio Mercati of the Perimeter Institute for Theoretical Physics suggests that perhaps the arrow of time doesn’t really require a fine-tuned, low-entropy initial state at all but is instead the inevitable product of the fundamental laws of physics. Barbour and his colleagues argue that it is gravity, rather than thermodynamics, that draws the bowstring to let time’s arrow fly. Their findings were published in October in Physical Review Letters.
 
 The team’s conclusions come from studying an exceedingly simple proxy for our universe, a computer simulation of 1,000 pointlike particles interacting under the influence of Newtonian gravity. They investigated the dynamic behavior of the system using a measure of its "complexity," which corresponds to the ratio of the distance between the system’s closest pair of particles and the distance between the most widely separated particle pair. The system’s complexity is at its lowest when all the particles come together in a densely packed cloud, a state of minimum size and maximum uniformity roughly analogous to the big bang. The team’s analysis showed that essentially every configuration of particles, regardless of their number and scale, would evolve into this low-complexity state. Thus, the sheer force of gravity sets the stage for the system’s expansion and the origin of time’s arrow, all without any delicate fine-tuning to first establish a low-entropy initial condition.
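As a rough illustration of that kind of measure, one can compute the ratio of the widest to the closest pair separation for a cloud of random points (the paper's actual, scale-invariant definition is more refined; this only follows the article's wording).
Code:
import numpy as np
from scipy.spatial.distance import pdist

def complexity(positions):
    # Ratio of the largest pairwise separation to the smallest one.
    d = pdist(positions)
    return d.max() / d.min()

rng = np.random.default_rng(1)
tight_cloud = rng.normal(scale=0.1, size=(1000, 3))   # dense, uniform blob: low complexity
clustered = np.concatenate([rng.normal(loc=c, scale=0.1, size=(500, 3))
                            for c in ([0.0, 0.0, 0.0], [50.0, 0.0, 0.0])])

print("tight cloud:", complexity(tight_cloud))
print("clustered  :", complexity(clustered))          # much larger once structure has formed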
 
 From that low-complexity state, the system of particles then expands outward in both temporal directions, creating two distinct, symmetric and opposite arrows of time. Along each of the two temporal paths, gravity then pulls the particles into larger, more ordered and complex structures—the model’s equivalent of galaxy clusters, stars and planetary systems. From there, the standard thermodynamic passage of time can manifest and unfold on each of the two divergent paths. In other words, the model has one past but two futures. As hinted by the time-indifferent laws of physics, time’s arrow may in a sense move in two directions, although any observer can only see and experience one. “It is the nature of gravity to pull the universe out of its primordial chaos and create structure, order and complexity,” Mercati says. “All the solutions break into two epochs, which go on forever in the two time directions, divided by this central state which has very characteristic properties.”
 
 Although the model is crude, and does not incorporate either quantum mechanics or general relativity, its potential implications are vast. If it holds true for our actual universe, then the big bang could no longer be considered a cosmic beginning but rather only a phase in an effectively timeless and eternal universe. More prosaically, a two-branched arrow of time would lead to curious incongruities for observers on opposite sides. “This two-futures situation would exhibit a single, chaotic past in both directions, meaning that there would be essentially two universes, one on either side of this central state,” Barbour says. “If they were complicated enough, both sides could sustain observers who would perceive time going in opposite directions. Any intelligent beings there would define their arrow of time as moving away from this central state. They would think we now live in their deepest past.”
 
 What’s more, Barbour says, if gravitation does prove to be fundamental to the arrow of time, this could sooner or later generate testable predictions and potentially lead to a less “ad hoc” explanation than inflation for the history and structure of our observable universe.
 
 This is not the first rigorous two-futures solution for time’s arrow. Most notably, California Institute of Technology cosmologist Sean Carroll and a graduate student, Jennifer Chen, produced their own branching model in 2004, one that sought to explain the low-entropy origin of time’s arrow in the context of cosmic inflation and the creation of baby universes. They attribute the arrow of time’s emergence in their model not so much to entropy being very low in the past but rather to entropy being so much higher in both futures, increased by the inflation-driven creation of baby universes.
 
 A decade on, Carroll is just as bullish about the prospect that increasing entropy alone is the source for time’s arrow, rather than other influences such as gravity. “Everything that happens in the universe to distinguish the past from the future is ultimately because the entropy is lower in one direction and higher in the other,” Carroll says. “This paper by Barbour, Koslowski and Mercati is good because they roll up their sleeves and do the calculations for their specific model of particles interacting via gravity, but I don’t think it’s the model that is interesting—it’s the model’s behavior being analyzed carefully…. I think basically any time you have a finite collection of particles in a really big space you’ll get this kind of generic behavior they describe. The real question is, is our universe like that? That’s the hard part.”
 
 Together with Alan Guth, the Massachusetts Institute of Technology cosmologist who pioneered the theory of inflation, Carroll is now working on a thermodynamic response of sorts to the new claims for a gravitational arrow of time: Another exceedingly simple particle-based model universe that also naturally gives rise to time’s arrow, but without the addition of gravity or any other forces. The thermodynamic secret to the model’s success, they say, is assuming that the universe has an unlimited capacity for entropy.
 
 “If we assume there is no maximum possible entropy for the universe, then any state can be a state of low entropy,” Guth says. “That may sound dumb, but I think it really works, and I also think it’s the secret of the Barbour et al construction. If there’s no limit to how big the entropy can get, then you can start anywhere, and from that starting point you’d expect entropy to rise as the system moves to explore larger and larger regions of phase space. Eternal inflation is a natural context in which to invoke this idea, since it looks like the maximum possible entropy is unlimited in an eternally inflating universe.”
 
 The controversy over time’s arrow has come far since the 19th-century ideas of Boltzmann and the 20th-century notions of Eddington, but in many ways, Barbour says, the debate at its core remains appropriately timeless. “This is opening up a completely new way to think about a fundamental problem, the nature of the arrow of time and the origin of the second law of thermodynamics,” Barbour says. “But really we’re just investigating a new aspect of Newton’s gravitation, which hadn’t been noticed before. Who knows what might flow from this with further work and elaboration?”
 
 “Arthur Eddington coined the term ‘arrow of time,’ and famously said the shuffling of material and energy is the only thing which nature cannot undo,” Barbour adds. “And here we are, showing beyond any doubt really that this is in fact exactly what gravity does. It takes systems that look extraordinarily disordered and makes them wonderfully ordered. And this is what has happened in our universe. We are realizing the ancient Greek dream of order out of chaos.”

Related reading:


http://eddiesblogonenergyandphysics.blogspot.com/2014/03/what-is-cause-of-arrow-of-time.html

http://www.nariphaltan.org/time.pdf

http://www.nariphaltan.org/origin.pdf

Meho Krljic

  • 5
  • 3
  • Posts: 48.363
The Paradoxes That Threaten To Tear Modern Cosmology Apart
 
 
Quote
Revolutions in science often come from the study of seemingly unresolvable paradoxes. An intense focus on these paradoxes, and their eventual resolution, is a process that has led to many important breakthroughs.
So an interesting exercise is to list the paradoxes associated with current ideas in science. It’s just possible that these paradoxes will lead to the next generation of ideas about the universe.
Today, Yurij Baryshev at St Petersburg State University in Russia does just this with modern cosmology. The result is a list of paradoxes associated with well-established ideas and observations about the structure and origin of the universe.
Perhaps the most dramatic, and potentially most important, of these paradoxes comes from the idea that the universe is expanding, one of the great successes of modern cosmology. It is based on a number of different observations.
The first is that other galaxies are all moving away from us. The evidence for this is that light from these galaxies is red-shifted. And the greater the distance, the bigger this red-shift.
Astrophysicists interpret this as evidence that more distant galaxies are travelling away from us more quickly. Indeed, the most recent evidence is that the expansion is accelerating.
What’s curious about this expansion is that space, and the vacuum associated with it, must somehow be created in this process. And yet how this can occur is not at all clear. “The creation of space is a new cosmological phenomenon, which has not been tested yet in physical laboratory,” says Baryshev.
What’s more, there is an energy associated with any given volume of the universe. If that volume increases, the inescapable conclusion is that this energy must increase as well. And yet physicists generally think that energy creation is forbidden.
Baryshev quotes the British cosmologist, Ted Harrison, on this topic: “The conclusion, whether we like it or not, is obvious: energy in the universe is not conserved,” says Harrison.
This is a problem that cosmologists are well aware of. And yet ask them about it and they shuffle their feet and stare at the ground. Clearly, any theorist who can solve this paradox will have a bright future in cosmology.
The nature of the energy associated with the vacuum is another puzzle. This is variously called the zero point energy or the energy of the Planck vacuum and quantum physicists have spent some time attempting to calculate it.
These calculations suggest that the energy density of the vacuum is huge, of the order of 10^94 g/cm^3. This energy, being equivalent to mass, ought to have a gravitational effect on the universe.
Cosmologists have looked for this gravitational effect and calculated its value from their observations (they call it the cosmological constant). These calculations suggest that the energy density of the vacuum is about 10^-29 g/cm^3.
Those numbers are difficult to reconcile. Indeed, they differ by 120 orders of magnitude. How and why this discrepancy arises is not known and is the cause of much bemused embarrassment among cosmologists.
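For what it's worth, the famous mismatch is just the logarithm of the ratio of those two quoted densities.
Code:
import math

rho_qft = 1e94    # g/cm^3, rough quantum-field-theory estimate of the vacuum energy density
rho_obs = 1e-29   # g/cm^3, value inferred from the observed cosmological constant
print(math.log10(rho_qft / rho_obs))   # ~123, i.e. roughly 120 orders of magnitude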
Then there is the cosmological red-shift itself, which is another mystery. Physicists often talk about the red-shift as a kind of Doppler effect, like the change in frequency of a police siren as it passes by.
The Doppler effect arises from the relative movement of different objects. But the cosmological red-shift is different because galaxies are stationary in space. Instead, it is space itself that cosmologists think is expanding.
The mathematics that describes these effects is correspondingly different as well, not least because any relative velocity must always be less than the speed of light in conventional physics. And yet the velocity of expanding space can take any value.
Interestingly, the nature of the cosmological red-shift leads to the possibility of observational tests in the next few years. One interesting idea is that the red-shifts of distant objects must increase as they get further away. For a distant quasar, this change may be as much as one centimetre per second per year, something that may be observable with the next generation of extremely large telescopes.
One final paradox is also worth mentioning. This comes from one of the fundamental assumptions behind Einstein’s theory of general relativity—that if you look at the universe on a large enough scale, it must be the same in all directions.
It seems clear that this assumption of homogeneity does not hold on the local scale. Our galaxy is part of a cluster known as the Local Group which is itself part of a bigger supercluster.
This suggests a kind of fractal structure to the universe. In other words, the universe is made up of clusters regardless of the scale at which you look at it.
The problem with this is that it contradicts one of the basic ideas of modern cosmology—the Hubble law. This is the observation that the cosmological red-shift of an object is linearly proportional to its distance from Earth.
It is so profoundly embedded in modern cosmology that most currently accepted theories of universal expansion depend on its linear nature. That’s all okay if the universe is homogeneous (and therefore linear) on the largest scales.
But the evidence is paradoxical. Astrophysicists have measured the linear nature of the Hubble law at distances of a few hundred megaparsecs. And yet the clusters visible on those scales indicate the universe is not homogeneous on those scales.
And so the argument that the Hubble law’s linearity is a result of the homogeneity of the universe (or vice versa) does not stand up to scrutiny. Once again this is an embarrassing failure for modern cosmology.
It is sometimes tempting to think that astrophysicists have cosmology more or less sewn up, that the Big Bang model, and all that it implies, accounts for everything we see in the cosmos.
Not even close. Cosmologists may have successfully papered over the cracks in their theories in a way that keeps scientists happy for the time being. This sense of success is surely an illusion.
And that is how it should be. If scientists really think they are coming close to a final and complete description of reality, then a simple list of paradoxes can do a remarkable job of putting feet firmly back on the ground.
Ref: arxiv.org/abs/1501.01919 : Paradoxes Of Cosmological Physics In The Beginning Of The 21-St Century

Meho Krljic

  • 5
  • 3
  • Posts: 48.363
Beautiful!

 The Quantum Experiment That Simulates A Time Machine
 
 
Quote
Physicists have simulated a photon interacting with an older version of itself in an experiment that could help reconcile quantum mechanics and relativity
One of the curiosities of general relativity is that it seems to allow time travel. Various physicists have discovered solutions to Einstein’s field equations that contain loops that return to the same point in space and time. Physicists call them closed time-like curves.
At first glance, these kinds of time machines seem to lead to all kinds of problems, such as the grandfather paradox. This is where somebody travels back in time and kills their grandfather meaning they could never have been born and so could not have gone back to kill the grandfather.
That’s just bizarre so physicists have attempted to find ways to prevent these paradoxes. In the early 90s, for example, cosmologists showed that a billiard ball entering a wormhole that leads to a closed time-like curve must always meet its older self coming out of the wormhole. What’s more, the resulting collision always prevents the ball entering the wormhole in the first place. In other words, the billiard ball would simply bounce off the entrance to a closed time-like curve.
So much for classical objects and time travel. But what would happen if a quantum particle entered a closed time-like curve? In the early 90s, the physicist David Deutsch showed that not only is this possible but that it can only happen in a way that does not allow superluminal signalling. So quantum mechanics plays havoc with causality but in a way that is consistent with relativity and so prevents grandfather-type paradoxes.
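For reference, Deutsch's consistency condition can be written as rho_CTC = Tr_sys[ U (rho_in ⊗ rho_CTC) U† ] and found by fixed-point iteration. The sketch below does this for one illustrative choice of unitary (a CNOT with the ordinary qubit as control); it is not the circuit used in the work discussed here, just the condition itself.
Code:
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # control = first (ordinary) qubit

def partial_trace(rho, keep):
    # Trace out one qubit of a two-qubit density matrix; keep=0 keeps the first qubit.
    rho = rho.reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3) if keep == 0 else np.trace(rho, axis1=0, axis2=2)

rho_in = 0.5 * np.ones((2, 2), dtype=complex)           # ordinary qubit prepared in |+><+|
rho_ctc = np.array([[1, 0], [0, 0]], dtype=complex)     # initial guess |0><0|

for _ in range(100):   # iterate the consistency map until it stops changing
    rho_ctc = partial_trace(CNOT @ np.kron(rho_in, rho_ctc) @ CNOT.conj().T, keep=1)

rho_out = partial_trace(CNOT @ np.kron(rho_in, rho_ctc) @ CNOT.conj().T, keep=0)
print("consistent CTC state:\n", np.round(rho_ctc, 3))   # here: maximally mixed
print("ordinary qubit afterwards:\n", np.round(rho_out, 3))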
Deutsch’s result has extraordinary implications. It implies that closed time-like curves can be used to solve NP-complete problems in polynomial time and to violate Heisenberg’s uncertainty principle. As far as we can tell, nobody has ever created a Deutsch closed time-like curve. So it’s easy to imagine that until we do, we will never know whether Deutsch’s predictions are true. But today Martin Ringbauer and a few pals at the University of Queensland in Australia say that it’s not necessary to create a closed time-like curve to test how it behaves.
Instead, these guys have created a quantum system that reproduces the behaviour of a photon passing through a closed time-like curve and interacting with its older self. In other words, these guys have built a time machine simulator.
That is not quite as far-fetched as it sounds. Physicists have long known that one quantum system can be used to simulate another. In fact, an emerging area of quantum science is devoted to this practice. “Although no closed time-like curves have been discovered to date, quantum simulation nonetheless enables us to study their unique properties and behaviour,” say Ringbauer and co.
The quantum system that they want to simulate is straightforward to describe. It consists of a photon interacting with an older version of itself. That’s equivalent to a single photon interacting with another trapped in a closed time-like curve.
That turns out to be straightforward to simulate using a pair of entangled photons. These are photon pairs created from a single photon and therefore share the same existence in the form of a wave function.
Ringbauer and co send these photons through an optical circuit which gives them arbitrary polarisation states and then allows them to interfere when they hit a partially polarising beam splitter. By carefully setting the experimental parameters, this entangled system can simulate the behaviour of a photon interacting with an older version of itself.
The result of this interaction can be determined by detecting the pattern of photons that emerges from the beam splitter.
The results make for interesting reading. Ringbauer and co say they can use the system to distinguish between quantum states that are prepared in seemingly identical ways, something that is otherwise not possible. They can also use the time machine simulator to tell apart quantum states that are ordinarily impossible to distinguish. But perhaps most significant is that all their observations are compatible with relativity. At no point does the time machine simulator lead to grandfather-type paradoxes, regardless of the tricks it plays with causality. That’s just as Deutsch predicted.
There are some curious wrinkles in these results too. For example, Ringbauer and co say that quantum inputs can change the output in a non-linear way but only for some experimental set ups. In other words, they can control the way the experiment twists causality, which is an interesting avenue for exploring just how far it is possible to distort cause and effect.
That’s a fascinating experiment which leads to some tantalising new ways to probe the link between quantum mechanics and relativity. As Ringbauer and co conclude: “Our study of the Deutsch model provides insights into the role of causal structures and non-linearities in quantum mechanics, which is essential for an eventual reconciliation with general relativity.”
There’s plenty more work to be done here, even before they fire up their Delorean. Worth watching.
Ref: arxiv.org/abs/1501.05014 : Experimental Simulation of Closed Timelike Curves


Meho Krljic

  • 5
  • 3
  • Posts: 48.363
A bit more quantum physics:
 Quantum ‘spookiness’ passes toughest test yet
 
Quote
Experiment plugs loopholes in previous demonstrations of 'action at a distance', against Einstein's objections — and could make data encryption safer.
 
 
It’s a bad day both for Albert Einstein and for hackers. The most rigorous test of quantum theory ever carried out has confirmed that the ‘spooky action at a distance’ that the German physicist famously hated — in which manipulating one object instantaneously seems to affect another, far away one — is an inherent part of the quantum world.
                                                            
The experiment, performed in the Netherlands, could be the final nail in the coffin for models of the atomic world that are more intuitive than standard quantum mechanics, say some physicists. It could also enable quantum engineers to develop a new suite of ultrasecure cryptographic devices.
                                                            
“From a fundamental point of view, this is truly history-making,” says Nicolas Gisin, a quantum physicist at the University of Geneva in Switzerland.
Einstein's annoyance
 
In quantum mechanics, objects can be in multiple states simultaneously: for example, an atom can be in two places, or spin in opposite directions, at once. Measuring an object forces it to snap into a well-defined state. Furthermore, the properties of different objects can become ‘entangled’, meaning that their states are linked: when a property of one such object is measured, the properties of all its entangled twins become set, too.
 This idea galled Einstein because it seemed that this ghostly influence would be transmitted instantaneously between even vastly separated but entangled particles — implying that it could contravene the universal rule that nothing can travel faster than the speed of light. He proposed that quantum particles do have set properties before they are measured, called hidden variables. And even though those variables cannot be accessed, he suggested that they pre-program entangled particles to behave in correlated ways.
In the 1960s, Irish physicist John Bell proposed a test that could discriminate between Einstein’s hidden variables and the spooky interpretation of quantum mechanics [1]. He calculated that hidden variables can explain correlations only up to some maximum limit. If that level is exceeded, then Einstein’s model must be wrong.
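Bell's bound is easiest to state in its CHSH form: any local hidden-variable model keeps |S| <= 2, while the textbook quantum correlation for a singlet state, E(a,b) = -cos(a-b), reaches 2*sqrt(2) at the standard choice of measurement angles. A quick numerical check:
Code:
import math

def E(a, b):
    # Quantum correlation of spin measurements along angles a and b for a singlet pair.
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2               # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), "vs the local hidden-variable limit of 2")   # ~2.828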
The first Bell test was carried out in 1981 [2], by Alain Aspect’s team at the Institute of Optics in Palaiseau, France. Many more have been performed since, always coming down on the side of spookiness — but each of those experiments has had loopholes that meant that physicists have never been able to fully close the door on Einstein’s view. Experiments that use entangled photons are prone to the ‘detection loophole’: not all photons produced in the experiment are detected, and sometimes as many as 80% are lost. Experimenters therefore have to assume that the properties of the photons they capture are representative of the entire set.
To get around the detection loophole, physicists often use particles that are easier to keep track of than photons, such as atoms. But it is tough to separate distant atoms apart without destroying their entanglement. This opens the ‘communication loophole’: if the entangled atoms are too close together, then, in principle, measurements made on one could affect the other without violating the speed-of-light limit.
Entanglement swapping
In the latest paper [3], which was submitted to the arXiv preprint repository on 24 August and has not yet been peer reviewed, a team led by Ronald Hanson of Delft University of Technology reports the first Bell experiment that closes both the detection and the communication loopholes. The team used a cunning technique called entanglement swapping to combine the benefits of using both light and matter. The researchers started with two unentangled electrons sitting in diamond crystals held in different labs on the Delft campus, 1.3 kilometres apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other — and this caused both their partner electrons to become entangled, too.
This did not work every time. In total, the team managed to generate 245 entangled pairs of electrons over the course of nine days. The team's measurements exceeded Bell’s bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too.
“It is a truly ingenious and beautiful experiment,” says Anton Zeilinger, a physicist at the Vienna Centre for Quantum Science and Technology.
“I wouldn’t be surprised if in the next few years we see one of the authors of this paper, along with some of the older experiments, Aspect’s and others, named on a Nobel prize,” says Matthew Leifer, a quantum physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. “It’s that exciting.”
A loophole-free Bell test also has crucial implications for quantum cryptography, says Leifer. Companies already sell systems that use quantum mechanics to block eavesdroppers. The systems produce entangled pairs of photons, sending one photon in each pair to the first user and the other photon to the second user. The two users then turn these photons into a cryptographic key that only they know. Because observing a quantum system disrupts its properties, if someone tries to eavesdrop on this process it will produce a noticeable effect, setting off an alarm.
The final chink
But loopholes, and the detection loophole in particular, leave the door open to sophisticated eavesdroppers. Through this loophole, malicious companies could sell devices that fool users into thinking that they are getting quantum-entangled particles, while they are instead being given keys that the company can use to spy on them. In 1991, quantum physicist Artur Ekert observed [4] that integrating a Bell test into the cryptographic system also would ensure that the system uses a genuine quantum process. For this to be valid, however, the Bell test must be free of any loopholes that a hacker could exploit. The Delft experiment “is the final proof that quantum cryptography can be unconditionally secure”, Zeilinger says.
In practice, however, the entanglement-swapping idea will be hard to implement. The team took more than a week to generate a few hundred entangled electron pairs, whereas generating a quantum key would require thousands of bits to be processed per minute, points out Gisin, who is a co-founder of the quantum cryptographic company ID Quantique in Geneva.
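A back-of-envelope comparison of those two rates, taking 'thousands of bits per minute' loosely as 1,000 and one key bit per entangled pair:
Code:
pairs, days = 245, 9
achieved_per_min = pairs / (days * 24 * 60)   # ~0.02 bits per minute
needed_per_min = 1_000                        # assumed round number for 'thousands'
print(f"achieved ~{achieved_per_min:.3f} bits/min, needed ~{needed_per_min} bits/min")
print(f"gap: roughly {needed_per_min / achieved_per_min:,.0f}x")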
Zeilinger also notes that there remains one last, somewhat philosophical loophole, first identified by Bell himself: the possibility that hidden variables could somehow manipulate the experimenters’ choices of what properties to measure, tricking them into thinking quantum theory is correct.
Leifer is less troubled by this ‘freedom-of-choice loophole’, however. “It could be that there is some kind of superdeterminism, so that the choice of measurement settings was determined at the Big Bang,” he says. “We can never prove that is not the case, so I think it’s fair to say that most physicists don’t worry too much about this.”
   Nature doi:10.1038/nature.2015.18255

mac

  • 3
  • Posts: 10.019
    • http://www.facebook.com/mihajlo.cvetanovic
That last ‘freedom-of-choice loophole’ fits nicely with your other recent post about the flood of studies nobody can reproduce.

Meho Krljic

  • 5
  • 3
  • Posts: 48.363
Yes, I thought the same thing while posting it. Still, let it stay here.

Meho Krljic

  • 5
  • 3
  • Posts: 48.363
'Zeno effect' verified: Atoms won't move while you watch



Quote
One of the oddest predictions of quantum theory – that a system can’t change while you’re watching it – has been confirmed in an experiment by Cornell physicists. Their work opens the door to a fundamentally new method to control and manipulate the quantum states of atoms and could lead to new kinds of sensors.
The experiments were performed in the Ultracold Lab of Mukund Vengalattore, assistant professor of physics, who has established Cornell’s first program to study the physics of materials cooled to temperatures as low as .000000001 degree above absolute zero. The work is described in the Oct. 2 issue of the journal Physical Review Letters.
Graduate students Yogesh Patil and Srivatsan Chakram created and cooled a gas of about a billion Rubidium atoms inside a vacuum chamber and suspended the mass between laser beams. In that state the atoms arrange in an orderly lattice just as they would in a crystalline solid. But at such low temperatures the atoms can “tunnel” from place to place in the lattice. The famous Heisenberg uncertainty principle says that position and velocity of a particle are related and cannot be simultaneously measured precisely. Temperature is a measure of a particle’s motion. Under extreme cold velocity is almost zero, so there is a lot of flexibility in position; when you observe them, atoms are as likely to be in one place in the lattice as another.
The researchers demonstrated that they were able to suppress quantum tunneling merely by observing the atoms. This so-called “Quantum Zeno effect,” named for a Greek philosopher, derives from a proposal in 1977 by E.C. George Sudarshan and Baidyanath Misra at the University of Texas, Austin, who pointed out that the weird nature of quantum measurements allows, in principle, for a quantum system to be “frozen” by repeated measurements.
Previous experiments have demonstrated the Zeno effect with the “spins” of subatomic particles. “This is the first observation of the Quantum Zeno effect by real space measurement of atomic motion,” Vengalattore said. “Also, due to the high degree of control we’ve been able to demonstrate in our experiments, we can gradually ‘tune’ the manner in which we observe these atoms. Using this tuning, we’ve also been able to demonstrate an effect called ‘emergent classicality’ in this quantum system.” Quantum effects fade, and atoms begin to behave as expected under classical physics.
The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser. A light microscope can’t see individual atoms, but the imaging laser causes them to fluoresce, and the microscope captured the flashes of light. When the imaging laser was off, or turned on only dimly, the atoms tunneled freely. But as the imaging beam was made brighter and measurements made more frequently, the tunneling reduced dramatically.
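The textbook toy version of this (not a model of the Cornell setup) is a two-level system that would have tunnelled with certainty after its full evolution, but is projectively measured N times along the way; its probability of staying put is (cos^2(theta/N))^N, which tends to 1 as N grows.
Code:
import math

theta = math.pi / 2   # total evolution angle at which the unobserved atom would have left its site

for n in (1, 10, 100, 1000):
    p_stay = math.cos(theta / n) ** (2 * n)   # survive N ideal measurements, one per step
    print(f"{n:>5d} measurements: P(still in the initial site) = {p_stay:.4f}")
# 1 measurement: 0.0000; 1000 measurements: ~0.9975 -- frequent observation freezes the atom.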
“This gives us an unprecedented tool to control a quantum system, perhaps even atom by atom,” said Patil, lead author of the paper. Atoms in this state are extremely sensitive to outside forces, he noted, so this work could lead to the development of new kinds of sensors.
The experiments were made possible by the group’s invention of a novel imaging technique that made it possible to observe ultracold atoms while leaving them in the same quantum state. “It took a lot of dedication from these students and it has been amazing to see these experiments be so successful,” Vengalattore said. “We now have the unique ability to control quantum dynamics purely by observation.”
The popular press has drawn a parallel with the “weeping angels” depicted in the “Dr. Who” television series – alien creatures who look like statues and can’t move as long as you’re looking at them. There may be some sense to that. In the quantum world, the folk wisdom really is true: “A watched pot never boils.”
The research was supported by the Army Research Office, the Defense Advanced Research Projects Agency under its QuASAR program and the National Science Foundation.
Recent Ph.D. graduate Chakram is now at the University of Chicago.

Ukronija

  • Guest
Well yes, but why? Why??  :lol:

Meho Krljic

  • 5
  • 3
  • Posts: 48.363
What do you mean, why? Why the system doesn't change while you're watching it? That's explained in considerable detail in quantum theory; this is just an experimental confirmation of the theory (and not the first one at that). Now, since none of us is a quantum physicist, we shouldn't expect the why to be immediately and intuitively clear. Here are some simplified explanations made for us laypeople:



http://youtu.be/7u_UQG1La1o


Quote
Let me explain this for the clueless/confused: When he says "observe" he doesn't mean a human/conscious being observing. In order to observe, you have to, directly or indirectly, interact with the thing you're observing, right? The machine is interacting with the electron due to that observing and that is what is collapsing the "wave form of the electron" into the "particle form of the electron". By observing/not observing the electron and thus interacting with the electron you are forcing it to make up its mind about whether it's here or there. The wave is a wave of the probability of finding the electron there so when you are observing/interacting with the electron it is more likely to be found at the peaks of the wave and less likely to be found at the troughs of the wave.