• Welcome to ZNAK SAGITE — more than speculative fiction — book series, magazine, bookstore....

We live SF

Started by zakk, 24-01-2009, 02:17:12




Father Jape

http://languagelog.ldc.upenn.edu/nll/?p=21750

Emoji Dick is a crowdsourced and crowdfunded translation of Herman Melville's Moby Dick into Japanese emoticons called emoji.

Each of the book's approximately 10,000 sentences has been translated three times by an Amazon Mechanical Turk worker. These results have been voted upon by another set of workers, and the most popular version of each sentence has been selected for inclusion in this book.

In total, over eight hundred people spent approximately 3,795,980 seconds working to create this book. Each worker was paid five cents per translation and two cents per vote per translation.
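The selection and payment pipeline described above can be sketched roughly as follows. This is a minimal illustration, assuming one vote per translation; the function names and the cost model defaults are mine, not taken from the book's own description:

```python
from collections import Counter

def select_translation(candidates, votes):
    """Pick the most popular of a sentence's candidate translations.

    candidates: the three emoji strings produced for one sentence
    votes: list of candidate indices (0-2), one per voting worker
    """
    tally = Counter(votes)
    winner_index, _ = tally.most_common(1)[0]
    return candidates[winner_index]

def payout(n_sentences, translations_per_sentence=3, votes_per_translation=1,
           translation_fee=0.05, vote_fee=0.02):
    """Rough cost model: five cents per translation, two cents per vote."""
    n_translations = n_sentences * translations_per_sentence
    n_votes = n_translations * votes_per_translation
    return n_translations * translation_fee + n_votes * vote_fee

# For one sentence with three candidates and four voters:
best = select_translation(["a", "b", "c"], [0, 1, 1, 2])  # candidate "b" wins

# Back-of-the-envelope cost for 10,000 sentences under these assumptions:
total_cost = payout(10000)
```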

The funds to pay the Amazon Turk workers and print the initial run of this book were raised from eighty-three people over the course of thirty days using the funding platform Kickstarter.
A pale man on the trail of a pervert.
That's the irrepressible horniness of youth.
Dušman in the absence of Dušman.

Linkin

We live SF, and quite literally at that:

Today is October 21, 2015, which is exactly the day in the future visited by the characters of the SF film series Back to the Future! Scary eh? :|


mac

Michael J. Fox received his self-lacing Nikes today. The remaining pairs go to auction, with the proceeds going to his foundation for Parkinson's disease research.

http://gizmodo.com/as-promised-nike-finally-reveals-sneakers-with-powered-1737898278

Meho Krljic

Feature: The bizarre reactor that might save nuclear fusion 

Click through to read, since there are pictures and charts. Alternatively, here's a video:

http://youtu.be/u-fbBRAxJNk

Meho Krljic

2,000 year old 'computer' discovered: How tech and shipwrecks are rewriting human history   
Quote
Using robots, underwater iPads, 3D printing, and other new tech, scientists are discovering shipwrecks that are rewriting our history. Read the inside story of the Antikythera and two other breakthrough explorations.
Under oceans across the world, hundreds of shipwrecks lie silent and forgotten. Having set sail to discover, trade, or wage war, the boats never reached safe harbour and exist now as time capsules beneath the waves.
When they took to the seas, some of these vessels were the state of the art, laden with some of the most advanced technology of their era. Now, thanks to the most advanced tech of our time, some long-sought wrecks are finally being found and explored for the first time.
TechRepublic talked to the teams behind some of the most high-profile shipwrecks to be discovered in recent years to find out how they've located the ships and uncovered their secrets—including a 2,000 year old device that may have been the world's first computer.

The Antikythera

If you thought the computer era started with the Colossus, or even with Babbage's designs, you'd be wrong. The advent of computing began before the birth of Christ, with a small bronze mechanism that was lost under the sea off Crete for over a thousand years.
Thought to have been built at the end of the second century BCE, the Antikythera mechanism is considered the first programmable computer. Thanks to an intricate series of gears and dials, the mechanism could be used as a calendar, to track the phases of the moon, and to predict eclipses. It's an object out of time: no other artefact as complex was built during the thousand years after the mechanism's creation—that we know of.
The Antikythera mechanism was named after the shipwreck on which it was discovered. Having sunk to the bottom of the sea in the first century BCE taking the mechanism with it, the shipwreck lay undisturbed until 1900, when a group of Greek sponge divers discovered it and began bringing its treasures to the surface.
After the death of one diver and the paralysis of two others, operations to recover the artefacts were brought to a halt, but not before statues, ceramics, and the mechanism itself were brought up.
In 1953 and 1976, marine explorer Jacques Cousteau led the next expeditions to the wreck, bringing up an assortment of objects, including more statues, coins, and gemstones. Due to the depth of the wreck and the diving technology of the period, divers could only spend a handful of minutes investigating the ship at a time or risk the bends that proved fatal to the first expedition.
Now, after time and technology have moved on, the Greek government invited a team from the Woods Hole Oceanographic Institution (WHOI), headed by Dr. Brendan Foley, to begin the first significant excavation of the wreck since the Frenchman's over 40 years ago. If Cousteau and his team made sprints to the Antikythera, the WHOI exploration is set to be more of a marathon. "We've been taking this steady incremental approach to the shipwreck, building the foundation of knowledge about it, then posing specific research questions, trying to answer them, and seeing what the next phase brings. When we first got to Antikythera in 2012, one of the questions we had was, does the island hold a whole lot of submerged cultural resources or is this the only shipwreck out there?" Foley said.
Investigators had only scratched the surface of the Antikythera in nearly two thousand years. A second wreck—mentioned in passing by Cousteau's team but never really explored—had been keeping the first, better-explored ship company all these years, practically untouched.
Foley's team set about circumnavigating the island of Antikythera, off whose coast the wreck lay, carrying out technical dives over a period of eight days, where they mapped everything human-made from the sea's surface down to its floor, 45 meters below.
When Cousteau's team had spotted the second wreck, they saw amphorae that looked probably Roman in origin—meaning the wreck could date from any time up to the fourth century BCE.
"We were the first archaeologists to see this [second] site and immediately we recognised that it had the exact same ceramics as the treasure wreck just up the coast" where the mechanism had been found, said Foley. The similarities between the two wrecks raised questions. Was the second wreck, dubbed Antikythera B, another ship that had sunk around the same time as the first Antikythera wreck? A second ship travelling in convoy with the Antikythera? Or something else entirely?
The debris trail stretching the 300 meters between the two ships looked to be continuous, suggesting that the two wreck sites were part of one larger ship that had split into two parts. Foley's team will be testing the hypothesis over the next few visits to the site, using technology to help them determine the true origins of the second wreck.
As it has every year since 2012, the team returned to Antikythera this summer to probe the wreck further, examining the area between the two wrecks and using both human divers and robots.
The team is using an autonomous underwater vehicle equipped with stereo cameras. Using an algorithm called SLAM (simultaneous localisation and mapping), the imagery from the stereo cameras can be knitted together to make an extremely precise map of the seafloor. During a few days in June, the robot created 10,500 square meters of map, with a resolution of 2mm. A separate remotely operated vehicle (ROV) carrying metal detecting equipment is also being used to spot hints of bronze or iron-carrying objects lying in the water.
Information from the ROV will be overlaid on top of the data from the 3D map generated by the autonomous underwater vehicle to build up a heat map of where the team should direct their excavation efforts when they return to the site later this summer.
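The overlay step described above can be sketched as a simple binning exercise: accumulate point readings from the metal-detecting ROV onto a grid covering the mapped area, so high-signal cells mark where to dig first. The coordinates, grid size, and threshold below are illustrative assumptions, not WHOI's actual data or method:

```python
def metal_heat_map(readings, extent=100.0, cells=10):
    """Accumulate (x, y, strength) readings into a cells x cells grid.

    x and y are metres from the map origin; strength is an arbitrary
    detector signal. Returns the grid as a list of rows.
    """
    grid = [[0.0] * cells for _ in range(cells)]
    cell_size = extent / cells
    for x, y, strength in readings:
        # Clamp to the last cell so points on the far edge stay in bounds.
        i = min(int(y // cell_size), cells - 1)
        j = min(int(x // cell_size), cells - 1)
        grid[i][j] += strength
    return grid

def dig_targets(grid, threshold):
    """Return (row, col) indices of cells whose signal exceeds threshold."""
    return [(i, j) for i, row in enumerate(grid)
            for j, value in enumerate(row) if value > threshold]

# Three hypothetical detector hits: two close together, one far away.
grid = metal_heat_map([(5.0, 5.0, 1.0), (7.0, 3.0, 2.0), (95.0, 95.0, 0.5)])
targets = dig_targets(grid, 1.0)  # only the dense corner cell qualifies
```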
By focusing excavation efforts on areas that show a higher density of metal, the excavations could potentially turn up more fragments of the Antikythera mechanism (only half of the system has been recovered to date). While such a discovery would generate headlines, tiny flecks of lead may have equally fascinating stories to tell.
If any lead artefacts are recovered, the team will take microscopic samples from them and send them away for spectroscopic analysis. By comparing the lead's isotope profile to other samples from around the world, the researchers will be able to home in on where the ship was built, or where it sailed from.
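The comparison step amounts to matching a sample's isotope-ratio profile against a reference database and taking the closest fit. A minimal sketch, assuming a nearest-neighbour match on three common lead isotope ratios; the region names and ratio values below are invented for illustration and are not real geochemical data:

```python
import math

# Hypothetical reference profiles: (Pb206/Pb204, Pb207/Pb204, Pb208/Pb204).
REFERENCE_PROFILES = {
    "Laurion": (18.85, 15.68, 38.90),
    "Iberia": (18.30, 15.60, 38.20),
    "Anatolia": (18.95, 15.70, 39.05),
}

def closest_source(sample, references=REFERENCE_PROFILES):
    """Return the reference region whose profile is nearest to the sample,
    using plain Euclidean distance over the ratio tuple."""
    def distance(profile):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sample, profile)))
    return min(references, key=lambda name: distance(references[name]))

# A measured sample very close to the "Laurion" profile:
origin = closest_source((18.84, 15.67, 38.92))
```

Real provenance work weighs measurement uncertainty and overlapping ore fields, but the nearest-profile idea is the core of it.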

Potentially, more of the bronze statues recovered on previous trips—hands, feet and other fragments have been found and are on display in the National Museum in Athens—could be identified through the metal heat map.
Finding more of the statues "would be quite a big contribution to art history and culture but we also expect that in amongst the fragments of the statues will be other amazing things. What kind of things? We can't even imagine. The possibilities are boundless. This ship sank carrying the finest material that was available in the entire eastern Mediterranean in the first century BC," Foley said.
Like the mechanism that it carried, the Antikythera is unique for its time period. Its hull planks are some of the thickest seen in antiquity, indicating the true size of the ship could be over 200 feet in length, putting it in the same ballpark as HMS Victory, the warship commanded by Admiral Lord Nelson during the Battle of Trafalgar—some 1700 years after the Antikythera sailed.
Why was the Antikythera so large? The only other known ships of the era that were larger were the pleasure barges that the Roman emperor Caligula used to cruise across Lake Nemi. The Antikythera, however, may have been built for a mix of business and pleasure.
One hypothesis is that the Antikythera may have carried both early tourists and freight, such as the huge bronze and marble statues it transported as cargo.
If the ship had to carry statues, some up to three meters tall, they'd have to be packed well to prevent damage in transit. It's been posited that sand or straw could be used as the packing material, but Foley suggests grain could be a more likely candidate: not only would the statues be protected but the grain could be sold on at the Antikythera's destination, making it a far more economical option.
"The ancient grain carriers weren't just cargo ships, they were more like RMS Titanic. They were more like luxury cruise liners," Foley said.
"The couple of extant literary references to grain carriers refer to these floating palaces: mosaic floors, libraries, and amazing cabins, well appointed for the passengers—the 200 or 300 passengers that could be aboard from Rome to Egypt or the Black Sea. They would be sort of the world's first tourists. As the ship was loaded up with grain, which could take a couple of months, they would tour around and then get back on the ship at the end of the season."

Any artefacts, such as mosaic pieces, would lend credence to the theory, but more evidence could come from the bones of passengers that died when the ship sank.
"There's other circumstantial evidence that points to this being the first grain carrier ever discovered, and that's the luxury goods that were carried onboard and also the presence of skeletal remains of a young woman," said Foley.
Remains of four people on the wreck have been found so far, and more may still be on the wreck. Should other bones be recovered, they will be subject to a rigorous recovery procedure to make sure there's no DNA cross-contamination between the dive workers and the bones themselves. All workers on the boat will give cheek swabs to make sure their genetic material can be identified if it ends up on the bones accidentally.
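The screening logic behind the cheek swabs can be sketched simply: keep every crew member's profile on file, and check any DNA recovered from the bones against that roster before treating it as ancient. The profiles here are simplified to short strings purely for illustration; real work would compare full STR or SNP profiles:

```python
# Hypothetical roster of crew swab profiles (illustrative strings only).
CREW_SWABS = {
    "diver_a": "AGTCCA",
    "diver_b": "TTGACG",
}

def screen_sample(bone_profile, crew_swabs=CREW_SWABS):
    """Return the matching crew member's ID if the profile from a bone
    matches someone on the roster (i.e. modern contamination), or None
    if it matches no one and can be treated as candidate ancient DNA."""
    for member, profile in crew_swabs.items():
        if profile == bone_profile:
            return member
    return None

# A profile matching diver_b flags contamination; an unknown one does not.
contaminated_by = screen_sample("TTGACG")
clean = screen_sample("CCATGA")
```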
WHOI is now looking for a company that can work with it to analyse the DNA from the bones, perhaps hinting at where those on the ship—be they sailors, high-roller tourists, or slaves—originated from.
The WHOI scientists have already got a handle on other aspects of the travellers' lives, from their hygiene habits to their diets, thanks to the ceramic storage vessels found on the wreck site. The first Antikythera wreck has already yielded amphorae, the "55 gallon drum of antiquity", table jugs known as lagynos, and unguentaria—the small bottles that would hold medicines, cosmetics or perfumes.
"With all of these types of ceramic artefacts, they're empty now, but we can take swabs and using police forensic techniques we can pull ancient trace DNA from the ceramic matrix of the original contents, down to the species level," Foley said.
It's not uncommon to find ancient ready meals in some of the jars—mixes of legumes or meats, herbs and spices—but the information from the jars can be far more valuable, giving an indication of what commodities were being traded between what locations, enabling archaeologists to get a better insight into the economy of a region than historical sources alone can provide.
"It's fun for us," said Foley, "because we feel like we've opened up a whole new vista on the past, and we can generate hard data on these early economies. What are they actually importing and exporting, what are they producing, what are they consuming? And it's all right there in these ostensibly empty jars."
Even traces of the ancient grain may still be hidden in the sands around the wrecks for those with the right tech to find it. While the grain is long gone, it will have decomposed to leave characteristic starches and structures called phytoliths, which can be detected with a powerful enough microscope.
WHOI's team returned to the wreck site in the summer of 2015 with their metallic heat maps to begin the process of finding out if the Antikythera has more secrets to give up.

"We're always analysing the data and updating the data, so this year, those wonderfully precise data from the maps produced by the robots, we'll have those on iPads. Those iPads will be in housings and we'll have interactive maps with us as we're diving on the site," Foley said.
The divers move through the water, iPad in hand, looking for the points of interest from the heat maps, and checking their position against those locations as they go. They carry handheld metal detectors too, to spot any metal artefacts buried under the seafloor surface, and are accompanied by professional photographers and videographers, as well as using the iPad cameras to gather snaps too.
"All those data at the end of the day are incorporated into the maps. In the best vision we have of this, we'll have a data manager incorporating everything we're doing daily," said Foley. "One of the goals will be to virtually excavate and re-excavate the site in the computer afterwards, by using our series of images over the trench we're digging to be able to take it down and refill it in the computer afterwards, so we make sure we're absolutely documenting every action we take."
The divers use rebreathers to allow them to investigate the wrecks at depths that would normally prove fatal to humans in a matter of minutes. By keeping the gases they breathe in and out inside a closed loop, adding oxygen where necessary and cleaning out the carbon dioxide, divers are able to spend a far longer time on site than they would be able to with conventional scuba gear.
"Putting humans in the water is always the option of last resort because we have to eat, we have to poop, we get tired and we're really not that efficient underwater. With the rebreather, we increase that efficiency, but still we only want to put people down when there's no other way to do the job," Foley said.
That's why today's underwater excavations will typically rely heavily on robots. They can spend far longer underwater and reach far greater depths than humans. However, often they're used as observers, with the most difficult work still done by humans.
Last year, WHOI experimented with a fusion of the two: an Iron Man-like exosuit. The exosuit is a small wearable submarine that keeps the diver's air at the same atmospheric pressure as it is in the water.
While the WHOI team didn't use the experimental suit for any work on the wreck site, it was tested out in the vicinity of the Antikythera, and the organisation is now considering whether to plough ahead with a development program.
"You can stay for hours and hours doing work or observing work, and then be winched right back up to the surface," said Foley. "You won't have to pay a decompression penalty. You just jump out of the suit and go have a cup of coffee."
Foley called the organisation's August 2015 diving and excavating trip "the most intensive period of activity on the Antikythera ever." The results of the landmark excavation are still being revealed.

The HMS Erebus

The 19th century saw the birth of polar exploration, as maritime nations rushed to stake their claim on the unknown winter continents.
In 1845, two ships left Kent bound for the Arctic, tasked with being the first to navigate the Northwest Passage, a hoped-for trade route between Europe and Asia through the Arctic Ocean.
The ships never returned to England.
It's thought the two vessels, HMS Erebus and HMS Terror, were abandoned when they became icebound, leaving the crew to begin a trek on foot across Canada in the hope of finding supplies, or human settlements along the way. The crewmen never made it to safety, and subsequent investigations of remains, found over 100 years later, found traces of starvation, lead poisoning, scurvy, pneumonia, and cannibalism among the party.
The history of the Erebus and Terror has been built up piecemeal since the ships were lost, using testimony from local Inuit, the objects left behind by the crew on their desperate journey, and even notes written by the acting captains following the death of Sir John Franklin, the expedition's captain.
The Inuit reported seeing one of the ships go down off the coast of King William island in around 1850, and they would be the last humans to lay eyes on the vessels for the century and a half that followed.

In 2008, Parks Canada, the Canadian Hydrographic Service, and the government of the Arctic territory of Nunavut began a fresh expedition to find the Erebus and the Terror—the latest in a long line of recovery missions that stretches back to Victorian times.
Over the years, the expedition had narrowed down its search to two areas, one in the Victoria Strait, another in Queen Maud Gulf, prompted by testimony from local Inuits who reported going aboard the vessel after its desertion by Franklin's men.
The Parks Canada team returned every year, surveying the two areas for traces of the lost vessels. With ice making the areas inaccessible for much of the year, the archaeologists had only a handful of weeks at a time to hunt for the missing ships.
In 2011, the searchers drafted new technology to aid the search: aircraft equipped with lidar systems, which could scan the shore areas to a depth of around 20 meters. While the lidar systems weren't expected to be able to pick up signs of a wreck, they could help the team put together better maps of the region, which is still largely uncharted even today. The Canadian Space Agency also joined the project, providing satellite map data from the Radarsat I and II satellites, allowing the team to better delineate the shoreline and the low tide marks.
"Even the maps for the coastline of this area weren't terribly accurate. They were off by about 4km. If you're trying to steer a survey line and not run into an island, 4km is fairly significant," said Ryan Harris, who led the Parks Canada team. With better maps, the team could use side-scan sonar and multibeam echosounding, which can build up a picture of the seafloor, without risk of damage to the environment or to their equipment.
After what Harris describes as "six very long, monotonous years staring at the sonar waterfall display cascading down the screen, often for very, very long hours—sometimes 16 hours a day—bobbing around on the ocean, turning a little bit green as we concentrated on the data all the while," in September last year, an image loomed out of the sonar data.
A shipwreck.
The team knew it had almost certainly found one of Franklin's ships. Due to its remoteness, very few ships have sunk in the region, and those that have are generally a matter of public record. Unless a whaling ship had made it up to the Queen Maud Gulf without being noticed, the team were likely to be the first people to see either the Erebus or the Terror in over 160 years.
The team changed its survey grid, aligning it with the axis of the ship, shortening the range of the sonar and boosting its resolution. The telltale details of the ship emerged.
"We could see, for example, a herringbone pattern of diagonally-laid upper decking, which is a laminate construction, sort of a second layer of decking laid over the first. It's absolutely typical of Royal Navy dockyard modifications for Arctic service," said Harris.
However, without any scuba gear on the survey boat, the first up-close look went to a robot, the Saab Seaeye Falcon remotely operated vehicle (ROV).
"That's when we saw the two brass six-pounder cannons," said Harris. "They were one of the first things we saw as we crept over the seafloor to the site. Everything was just so picture perfect. You couldn't have scripted it better, almost everything you looked at was just so remarkable."
When the scuba gear arrived, human divers were able to see the site for the first time. A gale had stirred up sediment under the water, but a mix of luck and judgment allowed Harris and his divemate to enter the water near a timber that Harris could follow "hand over hand" to the wreck proper.
"Out of the gloom on the seafloor loomed this stately shipwreck site, standing bolt upright. It was that phenomenal feeling of making contact with this icon of maritime history," said Harris. "It was absolute exhilaration."
While it's common for wrecks to be found broken and battered, much of the ship—later confirmed as the Erebus—was still intact. The weather deck, upper deck, and quarterdeck were all still identifiable, and although the upper deck had been ruptured by ice, the holes allowed the two Parks Canada divers to peer down into the rooms below. They saw a glass case bottle, a container of spirits reserved for officers, and examined the areas where the ordinary sailors bunked down and the mess table where they would have taken their meals.
The first dive also found the ship's bell, broken free from the belfry but otherwise undamaged, stamped with 1845—the date the two ships had set sail for the Arctic.
The ice closed over the site and eventually put an end to explorations, but the team were able to return to the Erebus in April 2015, carrying a new piece of equipment that would allow them to access the site even in winter.
Defence Research and Development Canada, the military's technology arm, lent the archaeologists a tool that uses a jet of hot water to cut through ice. Using DRDC's 'hot water knife', a two meter section of the ice was removed, allowing the divers to slip beneath the ice and onto the wreck site.
"The advantage of diving under the ice is that there are no waves," said Harris, "so all of the particulate settles down on the seafloor and you have really, really good visibility. That's what allows us to use different technological approaches to document the site that work a lot better."
As well as documenting the outside of Erebus and its location, the Parks Canada team face the difficulty of navigating within the ship itself, mapping the location of the objects within it and any subtle associations with them.
The team uses stereophotogrammetry for that. Harris said, "It's an extremely important tool for us now. Essentially it uses a whole bunch of still photos, and software is able to determine the three dimensional relationship between subsequent exposures and produces a three dimensional model or a point cloud of what the camera saw, so in just a couple of hours you can acquire a whole bunch of data and produce three dimensional images of the entire wreck site."
The expedition is also experimenting with laser scanning, in partnership with Canadian firm 2G Robotics which makes underwater scanners normally used for detecting damage on oil pipelines. The company developed a longer range scanner for the Franklin expedition, which can map up to a five meter range with millimeter resolution, used to image the outside of the wreck. The expedition also used a smaller machine, with a range of between 50cm and 20cm, for investigating the interior, allowing the team to record the position of small objects, like plates, where they lay within the ship.
The team had another novel piece of technology at its disposal: a 7.5-meter autonomous underwater vehicle, the Arctic Explorer. Unlike the humans that operate it, it can stay underwater for 72 hours, and was packed with all sorts of tech: inertial guidance systems and doppler velocity logs to plot the position and speed of the vehicle, as well as an interferometric synthetic aperture sonar (InSAS) system that can record a far wider swathe (630 meters) than the towed side-scan sonar system the survey boat normally uses.
Said Harris, "It can resolve a target the size of your thumb anywhere in that sonar record, because it's using almost like synthetic aperture radar—it's using multiple returns and it's synthesising that into one coherent, very, very accurate image."
Despite all its technical bells and whistles, the Arctic Explorer had to watch from the sidelines.
"We thought this was going to be the best technology to be used for underwater archaeology because we were hoping that the InSAS system would be able to detect very small, otherwise difficult-to-detect, cultural targets—detached rigging, rope lying on the seafloor, piece of iron plating lying flat on the bottom, any oars, or anything that might be difficult for us to detect with towed sonar," Harris said.
But it was thwarted when the team wanted to take it onto the two search sites last year. In the Victoria Strait search site, there was too much ice to deploy it; in the Queen Maud Gulf, the waters were too shallow for it to be used safely.
The technology may have been some of the best out there, but even it could be bested by Arctic conditions. It's a situation that Franklin and his men would have been familiar with.
Franklin's two ships were some of the first polar vessels to be equipped with steam engines—repurposed railway engines—leaving port with 12 days' coal aboard, as well as state-of-the-art Massey double action bilge pumps.
"I'd never seen [the Massey pumps] in real life until we were face to face," said Harris. "At the time, that was the very best thing that the Royal Navy could lay their hands on, but the technologies at the time were so short lived because everything was changing so quickly."
The ships that were sent to find Erebus and Terror five years later had already had their bilge pumps upgraded to the newer Daunton model. Harris said, "Things were changing so very quickly in that sort of industrial period that this is like a snapshot of what things were like in 1845."
While Harris and his team continue to uncover the Erebus' secrets and discover what other technologies she had onboard, the search will begin afresh for the Terror.
"We'll have five multibeam sonar systems pinging away at Victoria Strait trying to locate the second ship. That's important to do," Harris said. "The two ships together are a designated national historic site. To preserve them, protect them, and interpret them for the public, obviously we have to know where both of them are. Their stories obviously are intrinsically intertwined, so we hope in the fullness of time to understand what happened to the expedition and find as many clues as possible, and both ships would certainly assist that."

The Mars

When the Mars sank in 1564, it was perhaps the biggest ship in the world—a fearsome vessel with over one hundred guns and 700 men onboard.
The Mars met its end in a bloody sea battle between Sweden, which had built the formidable warship, and the combined armies of Denmark and the German province of Lübeck. During the battle, the Mars caught fire but despite the clear danger, the Mars was still boarded in the last minutes above the waves by enemy forces. The flames ignited the gunpowder stored on the ship causing a huge explosion that blew out the stern of the ship and took her, and the men aboard her—the Swedish sailors and invading forces alike—to the bottom of the ocean.
While the Danish and German soldiers must have known the risks of a ship that was already alight, they still ventured aboard. Why? One suggestion was that they were desperate to recover the thousands of valuable silver and gold coins the ship was said to carry, even if it meant risking—and ultimately losing—their lives.
For over four hundred years, the wreck and its rumoured treasure had slept 75 meters beneath the Baltic Sea. Many attempts had been made to find her since her loss on the first day of the Battle of Öland. The one that was to prove successful, staged by a group of divers known as Ocean Discovery, had been 20 years in the making.
Johan Rönnby, head of the MARIS research institute at Södertörn University set up to study the Mars, said, "Mars is a legendary ship in Sweden, and almost everybody wanted to find it. It was built by King Erik XIV, who was the son of Gustav Vasa—Vasa is our Tudor dynasty. It's a ship connected to the building of Sweden. Sweden had become a country, and there was an attempt to make Sweden a European superpower, and Mars was part of that concept, really. Erik had built maybe the biggest ship in the world in the 1560s, so Mars was a special ship. She was more than 60 meters long and very modernly equipped."
Ocean Discovery's divers had started their search decades ago, upgrading from echosounders to sidescan sonar in 1999. Due to the unreliability of the written sources of the time, the team had been investigating a relatively large area, 15 square miles, and had been hoping to zero in on the Mars using information from local fishermen on where their trawl nets had been caught on the seafloor—a sign that they might have become tangled in the wreck of the Mars.
The conditions in the Baltic Sea—the temperatures and absence of shipworm, which can destroy submerged timber—mean any ships that have sunk in its waters are often well preserved. Over the course of Ocean Discovery's search, the team had found tens of wrecked wooden ships maintained in a good state by the Baltic waters, located using the trawl snag data from the fishermen, but none had been the Mars.
Abandoning the historical data and information from fishermen, the team resorted to doing search passes over the area, dragging the sidescan sonar from east to west.
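The "mowing the lawn" pattern of search passes described above can be sketched as generating parallel east-west lanes over a rectangular search box, spaced by the sonar's swath width. The dimensions below are illustrative assumptions, not Ocean Discovery's actual survey parameters:

```python
def survey_lanes(width_m, height_m, swath_m):
    """Return ((x0, y), (x1, y)) east-west lanes covering a width x height
    box, spaced one swath apart and alternating direction so the towed
    sonar never has to double back over the same lane."""
    lanes = []
    y = swath_m / 2.0  # centre the first lane half a swath from the edge
    eastbound = True
    while y < height_m:
        start, end = (0.0, y), (width_m, y)
        lanes.append((start, end) if eastbound else (end, start))
        eastbound = not eastbound
        y += swath_m
    return lanes

# A 1 km x 300 m box with a 100 m swath needs three alternating passes.
lanes = survey_lanes(1000.0, 300.0, 100.0)
```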
One day in 2011, the team had been tracking debris from a wreck site for some hours after finding some masts when something out of the ordinary loomed into view on the side-scan sonar. "Halfway through, we found a wreck that looked like nothing else," Ingemar Lundgren said. A piece of the ship's hull 40 meters long had appeared, giving the first suggestion that the team had finally stumbled on the flagship of Swedish King Erik XIV's fleet.
"The first indication [it was the Mars] was the size of the wreck. It was really, really huge on the sea bottom. We could see on the sidescan sonar pictures that this was a big, big wreck. When we saw the first pictures from it, we recognised the ship's [building] techniques... it was in many ways similar to the Mary Rose. Then we had a good indication it was very likely that it was Mars," Rönnby said. The Mary Rose, a 16th century English warship and the pearl of King Henry VIII's fleet, was sunk ten years before the Mars and salvaged thirty years before it.
A four-man team—Richard Lundgren, Fredrik Skogh, Christoffer Modig, Anton Petersson—was on board the ship when it found the Mars, and sent a picture of the scan to Ingemar Lundgren, who was processing images from the vessel onshore. "I said it could well be the Mars, because it looked so different."
It took some time to confirm the exact identity of the wreck after its initial discovery, however. "The sidescan sonar is the best technology available, but it's not so detailed that you can see cannon and things," said Lundgren. "It's more technology for locating, not for marine archaeological survey."
Having spotted the wreck, the team sent down an ROV for a closer look. "The camera quality on the ROV is quite poor. We did see the intact hull side but we didn't see any gun ports. We were filming for an hour but we didn't see any cannons. Navigating an ROV on a complex wreck site like that is hard. It's very three dimensional, there's wood sticking up, and the umbilical from the ROV can get tangled. The ROV surveying couldn't prove it was the Mars, it could only prove it was a large warship," Lundgren added.
Absolute confirmation would require human divers. A three-man team, comprised of the two Lundgren brothers and Skogh, went in to investigate.
As they swam over the wreck, gradually distinctive cannons began to appear: first just one, caught in the beam of a single diver's flashlight, then five, six, seven piled up on top of each other.
Still it was not enough to put the wreck's identity beyond doubt: another 16th century Swedish warship, the Svärdet, had sunk in the same region as the Mars and not long after. Was it the Svärdet they had found?
Further dives in the weeks following the discovery were used to map the wreck's guns, some of which bore the Vasa coat of arms. Locating a wrought-iron breech-loading cannon, however, was enough to put the ship's identity beyond doubt.
Having found the Mars after a two-decade search, Ocean Discovery discovered they weren't the only wreck hunters in the region. Using AIS, the ship-tracking system that lets vessels see the positions of nearby ships, Ocean Discovery could see that a rival team from underwater survey business Marin Mätteknik (MMT) was also nearby and looking for the warship. In three or four days, the rivals would be on top of the wreck, before Ocean Discovery had had time to register the discovery as its own.
"We tried to distract them. We knew they could follow us on AIS," Ingemar said, "so we set up a search pattern away from the wreck site and we made it look like we had found something. We stopped in one place and deployed an ROV. They took the bait and came over." Ocean Discovery had won enough time to confirm the identity of the Mars and record it with the authorities.
The former rivals are now friends: Ocean Discovery, MMT, and the University of Södertörn formed a joint project to investigate the wreck.
Among the techniques used to research the Mars was photogrammetry: divers took hundreds of ordinary digital still photos of the site from many angles, which were then stitched together by photogrammetry software to create a two-dimensional map of the site.
"In 2012, we made a photo mosaic of the whole site with 600 pictures put together. It's at 70 meters depth, it's totally, totally dark in the Baltic Sea. It's a tricky case to work on. You need to take diving technology and rebreathers and a lot of lamps, of course," said Rönnby.
Thousands more photos have been added since, and divers will carry on adding more, thanks to funding from both the National Geographic North European Fund and the Waitt Institute.
Using sidescan sonar and multibeam sonar, the project began to build up a high-resolution three-dimensional picture of the wreck too.
Multibeam sonars can be either mounted on the underside of a ship or on an ROV and, by emitting sound waves and recording how long and from what direction they bounce off a surface and return, can build up a 3D picture of the sea floor.
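The geometry described above can be sketched in a few lines. This is a simplified single-beam model, not the project's actual processing chain; the sound speed and the flat-seabed example are illustrative assumptions:

```python
import math

# One multibeam sonar beam: from the echo's two-way travel time and the
# beam's across-track angle, compute where the echo came from in 3D
# (here reduced to a 2D across-track slice for clarity).

SOUND_SPEED_BALTIC = 1450.0  # m/s, rough value for cold brackish water (assumption)

def beam_to_point(two_way_time_s, across_track_angle_deg,
                  sound_speed=SOUND_SPEED_BALTIC):
    """Return (across_track, depth) of one echo relative to the transducer."""
    slant_range = sound_speed * two_way_time_s / 2.0  # one-way distance
    angle = math.radians(across_track_angle_deg)      # 0 deg = straight down
    across = slant_range * math.sin(angle)
    depth = slant_range * math.cos(angle)
    return across, depth

# A seabed 70 m down (the Mars site depth) seen by a straight-down beam:
two_way = 2 * 70.0 / SOUND_SPEED_BALTIC
print(beam_to_point(two_way, 0.0))  # -> (0.0, 70.0)
```

Sweeping many such beams across track, ping after ping as the ship or ROV moves, is what builds the 3D point cloud of the sea floor.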
Multibeam sonar, provided by MMT, gives highly accurate georeferencing, so archaeologists know where the wreck is and where each object can be found. The multibeam sonar and photogrammetry are used in concert. If an object located on a 2D image is worthy of further scrutiny, its location can be found using the multibeam, and a diver sent down to precisely the right place.
The project is also working with a BlueView sonar scanner from MMT, which when positioned on the seafloor can gather 60 million measurement points in 15 minutes. Combined with the million photos taken by divers, a map that's precise to two millimeters has been built up—higher resolution than the multibeam. In time, however, the BlueView point cloud will be merged with that from the multibeam sonar so the two technologies can fill in each other's gaps.
Thanks to the photogrammetry, BlueView, and multibeam sonar imagery of the wreck, the Mars can now be explored in great detail without putting divers tens of meters down in the freezing Baltic.
"A lot of people have said to us, 'Oh, a new Vasa ship, how should we be able to pay for the conservation and everything?' and we said no, we're not going to do that, we're going to salvage as much information as possible from the wreck instead and leave it on the sea bottom. It's what we call the future of maritime archaeology to be able to do that. That has been an important part of the whole technology development to do that," Rönnby said.
Much of the coming archaeology of the Mars will be done on dry land, using a computer, rather than by divers. Due to the resolution of the 3D model—and the relative lack of sediment in the environment—archaeologists will be able to explore the wreck in fine-grained detail. Those who have already been exploring the photo mosaic have been doing so at their desks, looking for artefacts or other elements the divers may have missed.
Those dry-land archaeologists have managed to pinpoint much of the treasure, including thousands of those rumoured silver coins. "On the 3D photo we can zoom in and see the coins lying there all around the sea bottom," said Rönnby. "It looks like a chest exploded with silver coins."
Even the few bones around the wreck can give up their secrets without being moved. A PhD student is studying the 3D mosaic, finding out the physical characteristics of the person—their height, whether they had certain diseases—and even how they died by studying fractures or burns on the bones.
While two silver coins and a couple of cannons have been salvaged and brought to the surface with the help of an ROV, the plan is to leave as much of the wreck as possible in situ, so the excavations won't ultimately affect the wreck in any destructive way. As well as images, the technology gathers the precise position of artefacts and other elements, so there's no need to pin out grids on the site either.
That's not to say that the Mars won't be seen above water in future, though. The detailed way the wreck has been mapped means that it can be brought to the surface in a new way: with 3D printing.
"You can dive on the wreck from the computer, you can zoom into details, you can see artefacts, you can turn them around and then most fantastic thing you can do—you can even print them. You can print parts of the structures or you can print artefacts with 3D printers."
So far, a not-to-scale section of the hull and one of the guns have been printed out. In future, perhaps, museums around the world could take advantage of such techniques. Multiple museums could print out copies of the same object from the wreck, giving them to visitors to touch or academics to study, and not have to worry about how to maintain the right conditions for conservation. The imagery could equally be used to build a 3D visualisation that visitors could manipulate and explore themselves, putting themselves at the heart of history.
In the next few years, would-be marine archaeologists will have yet another way to explore the Mars without getting their feet wet: the hope is to create a virtual reality version of the wreck that individuals can explore through an Oculus Rift headset.
According to Rönnby, Mars offers "the possibility to come so close to the middle of battle."
"A lot of the ship's timbers are still black and you can see the explosion," he said. "There are guns still sitting in wood, and cannonballs have penetrated into the hull. You are really close to the battlefield. In the end, that's what I think the purpose of archaeology is: to study general things about humans, in this case, why we are fighting, how we fight, and how people behave in war situations. I would like to use Mars as part of a general humanistic discussion about warfare and people in war."

Meho Krljic

In Finland, a "basic income" proposal for citizens is already being prepared; in other words, a model of giving all citizens a fixed regular sum so they don't have to worry about subsistence. We're living SF!!!!!




Kela to prepare basic income proposal



Quote
The Finnish Social Insurance Institution is to begin drawing up plans for a citizens' basic income model. The preparation's director Olli Kangas says that full-fledged basic income would net Finns some 800 euros a month.

The Finnish Social Insurance Institution (Kela) will soon begin work on a presentation for basic income, regional news group Lännen Media reports. Once implemented, the model could revolutionise the Finnish social welfare system.
If implemented, the so-called basic income would replace other benefits people currently receive, and would therefore be rather high, Kela's Research Department Manager Olli Kangas told Lännen Media.
Under basic income, all Finnish citizens would be paid an untaxed benefit sum by the government. Kangas says the model would see Finns paid some 800 euros a month in its full form, and 550 euros monthly in the model's pilot phase.
The basic income model's full-fledged form would make some earnings-based benefits obsolete, but in the partial pilot format benefits would not be affected. The partial model would also retain housing benefits and income support packages.
Kela says it will prepare the basic income proposal by November 2016. The government's nationwide basic income trial will be based on the finished proposal.



Linkin

The College Humor crew evidently doesn't think we're living SF to a sufficient degree.  :( :cry:

! No longer available

Meho Krljic

Addressing 4 billion people in three words


Quote

If you can't be located, you're nobody. What3Words, a London startup, tackles one of the developing world's most critical challenges: providing a universal address for people who don't have a physical one.  (Part of a series on technology and global development.)

Last week in New York, at the Next Billion conference organized by Quartz, Chris Sheldrick, the CEO of What3Words, captured his audience with strong arguments: 75% of the earth's population, i.e. four billion people, "don't exist" because they have no physical address. This cohort of the "unaddressed" can't open a bank account, can't deal properly with a hospital or an administration, let alone get a delivery. This is a major impediment to global development.

Governments, the World Bank, and various NGOs have poured millions of dollars into addressing programs. A country like Ghana has tried four times without success. In Brazil, this portion of Rio de Janeiro, with its sparse network of roads and streets, looks like an empty land:

This is one of the largest slums in the world, the Rocinha favela: 355 acres (143 hectares) of intertwined sheds housing 70,000 people. Translated into density, this amounts to a staggering 120,000 persons per square mile (48,000 per km2). Go figure how to deliver a package, or simply how to provide the most basic administrative assistance such as monitoring health or education.

The developing world is not the only one to suffer from poor addressing.

Decades of urbanization have not necessarily been accompanied by discipline when it comes to building a reliable address system. This blog, maintained by a British computer scientist named Michael Tandy, compiles an outstanding series of absurd occurrences in global addressing systems. Here is just one example, an address in Tokyo.

〒100-8994 (zip code), 東京都 (Tokyo-to, i.e. Tokyo prefecture or state) 中央区 (Chuo-ku, i.e. Chuo Ward) 八重洲一丁目 (Yaesu 1-chome, i.e. Yaesu district 1st subdistrict) 5番3号 (block 5 lot 3), 東京中央郵便局 (Tokyo Central Post Office).

Messy addressing systems have measurable consequences. UPS, the world's largest parcel delivery provider, calculated that if its trucks merely drove one mile less per day, the company would save $50m a year. In the United Kingdom, bad addressing costs the Royal Mail £775m per year.

One might say latitude and longitude can solve this. Sure thing. Except that GPS coordinates require 16 digits, 2 characters (+/-/N/S/E/W), 2 decimal points, a space, and a comma to specify a location the size of a housing block. Not helpful for a densely populated African village, or a Mumbai slum.

In his previous job, Chris Sheldrick (now 33) had his epiphany while organizing large musical events around the world. Tons of material had to be shipped to a specific location at a specific date and time. After several mishaps, he too tried using GPS coordinates to make dozens of flight cases converge at the right time and place. But people got confused with lat/long, sometimes mixing up ones and sevens, etc. After a dramatic mistake that almost ruined a large wedding party in the Italian countryside, he vented his frustration to a mathematician friend, who suggested the following: why not replace GPS coordinates with actual words that anyone can understand and memorize?

Sheldrick's mathematician pal came up with a simple idea: a combination of three words, in any language, could specify every 3-meter-by-3-meter square in the world. More than enough to designate a hut in Siberia or a building doorway in Tokyo. Altogether, 40,000 words combined in triplets label 57 trillion squares. Thus far, the system has been built in 10 languages: English, Spanish, French, German, Italian, Swahili, Portuguese, Swedish, Turkish and, starting next month, Arabic. All together, this lingua franca requires only 5 megabytes of data, small enough to reside in any smartphone and work offline. Each square has its identity in its own language, which is not a translation of another. The dictionaries have been refined to avoid homophones and offensive terms, with short words reserved for the most populated areas. And, unlike the GPS lat/long system, What3Words has an autocorrect feature that proposes the right terms if words are misspelled, or even mispronounced, since the system is meant to be used in voice-recognition navigation systems.
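The figures in that paragraph are easy to sanity-check. A quick sketch of the arithmetic, assuming Earth's total surface area is roughly 510 million km²; this checks the article's round numbers, it is not What3Words' actual cell-to-words encoding algorithm:

```python
# How many 3 m x 3 m squares cover the Earth, and can 40,000 words
# combined into ordered triplets label them all?

EARTH_SURFACE_KM2 = 510_000_000           # approximate total surface of Earth
cell_area_m2 = 3 * 3                      # one What3Words square
cells = EARTH_SURFACE_KM2 * 1_000_000 // cell_area_m2
print(f"{cells:.2e}")                     # ~5.7e13, i.e. the 57 trillion squares

words = 40_000                            # dictionary size per language
triplets = words ** 3                     # word order matters, so 40,000^3
print(f"{triplets:.2e}")                  # ~6.4e13 triplets, enough for every cell
assert triplets >= cells
```

The dictionary gives about 64 trillion ordered triplets against 57 trillion squares, which is why a fixed 40,000-word vocabulary suffices with room to spare.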

For now, What3Words Ltd. has solid funding. It just completed a $3.4m second round of financing from Intel Capital and Li Ka-Shing's Horizons Ventures, adding to $2m already raised from angel investors. Enough to develop additional languages, make the mapping system accessible by voice, make it embeddable into third-party navigation devices, and organize the vast marketing effort required to scale the mapping system.

What3Words' monetization relies on access to its API (the software layer that connects to its addressing system) and its application Software Development Kit (SDK). Hence two models: non-profit for NGOs or local services such as this one delivering medicine in a South African township. To many humanitarian organizations, What3Words' features could be invaluable in solving crucial problems such as delivering supplies in uncharted areas (such as refugee camps), or improving medical aid by identifying every patient's location.

On the for-profit side, What3Words is inevitably catching the attention of a vast array of corporations that struggle with bad addressing. As explained by W3W's marketing director Giles Rhys Jones (a former Ogilvy UK executive), examples range from an oil company prospecting in a remote region to a construction company building an infrastructure project. Among the companies that have fully integrated What3Words are Navmii, which has gathered 24 million users in 90 countries, the Norwegian mapping provider Kartverket (anyone who has driven through empty Norwegian country can understand the benefits), and a Brazilian delivery provider. For What3Words, the decisive boost will come from integration into major mapping suppliers such as Google Maps or Waze.


дејан

Pre-crime is near. Smile, Facewatch is watching you and NeoFace recognizes you... and just wait until it all gets integrated into Facebook/Instagram/Twitter...

Quote
As you may know, we're big fans of CCTV in the UK. At the last count there were around 6 million CCTV cameras in the UK, or about one for every ten people living here. Most of these cameras are passive: they don't actually do anything, except constantly record to a tape or hard drive.
The big exceptions are real-time police and intelligence cameras, such as the UK's automatic number plate recognition (ANPR) system. Here, in addition to being stored on hard drives, number plates are actively interrogated and matched against a database of missing vehicles and wanted people.
The UK's police and intelligence agencies probably have similar real-time matching abilities with other private and public CCTV networks, though that information is obviously hard to come by. Most recently, though, the Metropolitan Police asked for access to Transport for London's ANPR network so that it can carry out real-time facial recognition on all motorists entering London.
Which brings us neatly onto today's interesting bit of news. Facewatch is a system that lets retailers, publicans, and restaurateurs easily share private CCTV footage with the police and other Facewatch users. In theory, Facewatch lets you easily report shoplifters to the police, and to share the faces of generally unpleasant clients/drunks/etc with other Facewatch users. The BBC reports that Facewatch is currently used at around 10,000 premises. The Facewatch website is full of positive testimonials from shop owners and police forces alike; it does seem to work as intended.
Now, however, Facewatch has been updated so that it can be integrated with real-time face recognition systems, such as NEC's NeoFace. Where previously a member of staff had to keep an eye out for people on the crowdsourced Facewatch watch list, now the system can automatically tell you if someone on the watch list has just entered the premises. A member of staff can then keep an eye on that person, or ask them politely (or not) to leave.
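At its core, a watch-list system like the one described compares a face seen by the camera against stored faces. Below is a minimal, hypothetical sketch of that matching step, assuming faces have already been reduced to feature vectors by some recognition model; the embeddings, subject names, and the 0.8 threshold are made-up stand-ins, since NEC's actual NeoFace features are proprietary:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_watchlist(probe, watchlist, threshold=0.8):
    """Return the best-matching watchlist entry above threshold, or None."""
    best_name, best_score = None, threshold
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy watch list: two stored "subjects of interest" as 3-dim embeddings.
watchlist = {"subject_042": [0.9, 0.1, 0.4], "subject_107": [0.1, 0.8, 0.2]}
print(match_watchlist([0.88, 0.12, 0.41], watchlist))  # -> subject_042
```

Everything contentious about such systems lives outside this loop: who gets onto the watch list, how the threshold is tuned, and what happens on a match.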

Pre-crime

In the film Minority Report, people are rounded up by the Precrime police agency before they actually commit the crime. In the movie this pre-crime information is provided by "pre-cognition" savants floating in a goopy nutrient bath who can apparently see the future.
Replace those gibbering pre-cog mutants with Facewatch, and you pretty much have the same thing: a system that automatically tars people with a criminal brush, irrespective of dozens of important variables.
Facewatch lets you share "subjects of interest" with other Facewatch users even if they haven't been convicted. If you look at the shop owner in a funny way, or ask for the service charge to be removed from your bill, you might find yourself added to the "subject of interest" list.
Or what if you have been convicted, but have since come out of the criminal justice system a reformed person? Or what if you were convicted of some completely unrelated crime, but still find yourself stalked by a security guard every time you visit Tesco? Or what if you ended up on the watch list because you walked past McDonald's during a democratic protest against the government or police, and then find yourself ushered out of every shop, restaurant, and pub you visit henceforth?
Pre-crime is potentially an awesome idea with the right safeguards, but to put that kind of power in the hands of private citizens and without significant oversight and code review is quite insane and open to egregious abuse.
The creator of Facewatch, Simon Gordon, told the BBC that the effectiveness of face recognition systems is increasing, while the price of such systems is falling rapidly. "Probably by the end of next year, it will be almost like having a mobile phone," he said. Jolly good.
...barcode never lies
FLA

zakk

Phew, I don't know how feasible and economically accessible real-time face recognition actually is; if anyone knows, please share the details...
Why shouldn't things be largely absurd, futile, and transitory? They are so, and we are so, and they and we go very well together.

Irena Adler

But even without that, "preventive policing" seems to be becoming... a thing.
https://aeon.co/essays/do-we-really-want-to-use-predictive-policing-to-stop-crime

(and maybe all of this is fine, it's just that there's no public debate about any of it anywhere, and there should be)

Meho Krljic

Yes, pre-crime has become quite a popular concept. For instance:



The age of 'pre-crime' has arrived

Quote
The headline reads, "LA City Council Considers Sending 'Dear John' Letters To Homes Of Men Who Solicit Prostitutes," but what they're considering is quite a bit worse than that.
Los Angeles is considering sending "Dear John" letters to the homes of men who solicit prostitutes hoping the mail will be opened by mothers, girlfriends or wives.
Privacy advocates are slamming the idea. The plan would use automated license plate readers to generate the letters, which would be aimed at shaming "Johns," the Los Angeles Daily News reported.
The city council voted Wednesday to ask the City Attorney's office to examine sending so-called "John Letters," the Daily News reported.
Council member Nury Martinez, who represents a San Fernando Valley district that has a thriving street prostitution problem, introduced the plan.
Martinez has said many of the prostitutes are children, or women being exploited.
In a statement issued by her office Wednesday, Martinez said, "If you aren't soliciting, you have no reason to worry about finding one of these letters in your mailbox. But if you are, these letters will discourage you from returning. Soliciting for sex in our neighborhoods is not OK."
I'm personally of the "what consenting adults do on their own time is none of the government's business" camp. The line about children and exploited women is odd, too. We're now at the point of passing laws aimed at potential johns suspected of soliciting prostitutes simply because they were seen in an area where prostitutes are known to work, all because it's possible that the theoretical prostitutes those suspected johns might have been soliciting are potentially underage or might have been forced into sex work involuntarily.
But even if you support laws against prostitution, this is a pretty awful way to enforce them. True, no one is going to jail. You're just potentially ruining lives. And as Nick Selby writes at Medium, Martinez is wrong. One needn't have actually solicited a prostitute to get one of these letters.
Have Ms. Martinez and the Los Angeles City Council taken leave of their senses? This scheme makes, literally, a state issue out of legal travel to arbitrary places deemed by some — but not by a court, and without due process — to be "related" to crime in general, not to any specific crime.
There isn't "potential" for abuse here, this is a legislated abuse of technology that is already controversial when it's used by police for the purpose of seeking stolen vehicles, tracking down fugitives and solving specific crimes.
It is theoretically possible that a law enforcement officer could observe an area he understands to be known for prostitution, and, upon seeing a vehicle driving in a certain manner, or stopping in front of suspected or known prostitutes, based on his reasonable suspicion that he bases on his analysis of the totality of these specific circumstances, the officer could speak with the driver to investigate. This is very uncommon, because it would take a huge amount of manpower and time.
The City Council and Ms. Martinez seek to "automate" this process of reasonable suspicion (reducing it to mere presence at a certain place), and deploy it on a massive scale. They then seek to take this much further, through a highly irresponsible (and probably illegal) action that could have significant consequences for the recipient of such a letter — and they have absolutely no legal standing to write, let alone send, it. There are grave issues of freedom of transportation and freedom of association here.
Guilt by association would be a higher standard.

Worse, they seek to use municipal funds to take action against those guilty of nothing other than traveling legally on city streets, then access the state-funded Department of Motor Vehicle registration records to resolve the owner data, then use municipal moneys to write, package and pay the United States Postal Service to deliver a letter that is at best a physical manifestation of the worst kind of Digital McCarthyism.

I'd add one more awful consequence to this policy: It's essentially stating that there are some neighborhoods where a person's mere presence is indicative of criminal activity — that the only reason one would visit these areas is to solicit sex for money. Think about what that says to the people who live and work in those areas. It's also a pretty surefire way to prevent these neighborhoods from ever improving. Why would anyone travel to or through an area designated a "prostitution zone" to, say, offer job training, counseling, medical care or other services if doing so means their name winds up in a database of suspected johns?

The L.A. City Council is still considering the policy. But as the CBS Los Angeles article linked above points out, cities such as Minneapolis, Des Moines and Oakland, Calif., are already sending letters. Just wait until they start trying to take these people's cars, too.

дејан

Quote from: zakk on 18-12-2015, 11:23:55
Phew, I don't know how feasible and economically accessible real-time face recognition actually is; if anyone knows, please share the details...
Here are a few links from the article about the Facewatch and NeoFace technology,
as well as direct links to the "products," where the technology used is more or less explained.


Quote
http://arstechnica.co.uk/tech-policy/2015/07/uk-police-to-get-real-time-access-to-photos-of-drivers-entering-london/
http://www.bbc.com/news/technology-35111363
https://www.facewatch.co.uk/cms
http://www.nec.com/en/global/solutions/safety/face_recognition/NeoFaceWatch.html


But I don't see that much of anything (money or time) is needed for recognizing faces in real time... I'm not quite sure what's puzzling you there.
...barcode never lies
FLA

mac

If it weren't for Bob Živković I wouldn't even have known, but the Falcon 9 rocket's first stage came back and landed successfully after sending the second stage with its satellites into orbit. We live in interesting times.

45 minutes of the original broadcast
https://www.youtube.com/watch?v=O5bTbVbe4e4

Liftoff is at 22:45, landing at 31:25 (with a bit of introductory talk to make it more interesting).

The key few minutes on their own
https://www.youtube.com/watch?v=ZqcPqJfuzG4


Meho Krljic

We're living SF, in this case George Orwell's 1984.

  Samsung Warns Customers To Think Twice About What They Say Near Smart TVs

Quote
In a troubling new development in the domestic consumer surveillance debate, an investigation into Samsung Smart TVs has revealed that user voice commands are recorded, stored, and transmitted to a third party. The company even warns customers not to discuss personal or sensitive information within earshot of the device.
This is in stark contrast to previous claims by tech manufacturers, like PlayStation's, who vehemently deny their devices record personal information, despite evidence to the contrary, including news that hackers can gain access to unencrypted streams of credit card information.
The new Samsung controversy stems from the discovery of a single haunting statement in the company's "privacy policy," which states:
"Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party."
This sparked a back and forth between the Daily Beast and Samsung regarding not only consumer privacy but also security concerns. If our conversations are "captured and transmitted," eavesdropping hackers may be able to use our "personal or other sensitive information" for identity theft or any number of nefarious purposes.
There is also the concern that such information could be turned over to law enforcement or government agencies. With the revelation of the PRISM program — by which the NSA collected data from Microsoft, Google, and Facebook — and other such NSA spying programs, neither the government nor the private sector has the benefit of the doubt in claiming tech companies are not conscripted into divulging sensitive consumer info under the auspices of national security.
Michael Price, counsel in the Liberty and National Security Program at the Brennan Center for Justice at the NYU School of Law, stated:
"I do not doubt that this data is important to providing customized content and convenience, but it is also incredibly personal, constitutionally protected information that should not be for sale to advertisers and should require a warrant for law enforcement to access."
Responding to the controversy, Samsung updated its privacy policy, named its third party partner, and issued the following statement:
"Voice recognition, which allows the user to control the TV using voice commands, is a Samsung Smart TV feature, which can be activated or deactivated by the user. The TV owner can also disconnect the TV from the Wi-Fi network."
Under still more pressure, Samsung named its third party affiliate, Nuance Communications. In a statement to Anti-Media, Nuance said:
"Samsung is a Nuance customer. The data that Nuance collects is speech data. Nuance respects the privacy of its users in its use of speech data. Our use of such data is for the development and improvement of our voice recognition and natural language understanding technologies. As outlined in our privacy policy, third parties work under contract with Nuance, pursuant to confidentiality agreements, to help Nuance tailor and deliver the speech recognition and natural language service, and to help Nuance develop, tune, enhance, and improve its products and services.
"We do not sell that speech data for marketing or advertising. Nuance does not have a relationship with government agencies to turn over consumer data. ... There is no intention to trace these samples to specific people or users."
Nuance's Wikipedia page mentions that the company maintains a small division for government and military system development, but that is not confirmed at this time.
Despite protestations from these companies that our voice command data is not being traced to specific users or, worse, stored for use by government or law enforcement agencies, it seems that when it comes to constitutional civil liberties, the end zone keeps getting pushed further and further down the field.
For years, technologists and smart device enthusiasts claimed webcam and voice recording devices did not store our information. While Samsung may be telling the truth about the use of that data, there are countless companies integrating smart technology who may not be using proper encryption methods and may have varying contractual obligations to government or law enforcement.
Is it really safe for us to assume that the now exceedingly evident symbiotic relationship between multinational corporations and government agencies does not still include a revolving door for the sharing of sensitive consumer data?



Meho Krljic

What horror? Anything that helps me watch Mad Max again for the first time is a blessing!!!!!!!!!!!!

Biki_respawned

Well, who hasn't wished they could forget something, especially bad experiences, which, as scientists say, stay etched into our memories better and in more detail.

Negative events may edge out positive ones in our memories, according to research by Kensinger and others. "It really does matter whether [an event is] positive or negative in that most of the time, if not all of the time, negative events tend to be remembered in a more accurate fashion than positive events," Kensinger said.

Speaking of memories, this story reminded me of EKV:
"I want to forget
this day and the day before..."

http://youtu.be/KIQ1sa1CkqY




Meho Krljic

Until now we accepted quantum teleportation of information somewhat through gritted teeth, grumbling that it's all fine because it isn't real physics anyway, and who knows what this quantum entanglement really is, and maybe those scientists are lying to us, because how would we ever check, etc. But what do you know: now in Germany they've teleported information without going down to the quantum level at all, i.e., within the bounds of "normal" Newtonian physics:




German scientists successfully teleport classical information


Quote
JENA, Germany, March 4 (UPI) -- Using a series of laser beams, a pair of German scientists successfully teleported classical information without the transfer of matter or energy.
Researchers have previously demonstrated local teleportation within the world of quantum particles. But the latest experiment successfully translates the phenomenon for classical physics.
"Elementary particles such as electrons and light particles exist per se in a spatially delocalized state," Alexander Szameit, a professor at the University of Jena, explained in a press release.
In other words, these particles can be in two places at the same time.
"Within such a system spread across multiple locations, it is possible to transmit information from one location to another without any loss of time," Szameit said.
By coupling the properties of classical information, researchers were able to use quantum teleportation for classical teleportation. Classical information is coupled using a process called "entanglement."
"As can be done with the physical states of elementary particles, the properties of light beams can also be entangled," said researcher Marco Ornigotti. "You link the information you would like to transmit to a particular property of the light."
Researchers used polarization to encode information within a laser beam, enabling the teleportation of information instantly and in its entirety without loss of time.
Whereas quantum information and quantum systems describe particle properties that are inferred, classical information describes physical properties directly measured.
The first-of-its-kind demonstration was detailed this week in the journal Laser & Photonics Reviews.

PTY


Father Jape


Meho Krljic

Elon Musk wasn't just dreamily meditating a few years ago when he talked about the hyperloop:



  Hyperloop One technology tested successfully in Nevada desert 

Quote
Hyperloop One, a Los Angeles company working to develop futuristic transportation technology, conducted a successful test of its high speed transportation technology Wednesday in the desert outside Las Vegas.
The seconds-long, outdoor demonstration by Hyperloop One featured what appeared to be a blip of metal gliding across a small track before disappearing into a cloud against the desert landscape.
A fully operational hyperloop would whisk passengers and cargo in pods through a low pressure tube at speeds of up to 1,207 km per hour (750 miles per hour). That could make it possible to travel from Montreal to Toronto in half an hour or Toronto to Vancouver in just three.
Maglev technology would levitate the pods to reduce friction in the city-to-city system, which would be fully autonomous and electric powered.
Brogan BamBrogan, a former engineer with Elon Musk's SpaceX company, said he was happy with the results of the test.
"That's what it was supposed to do. So we always like it when engineering tests go that way," said BamBrogan, Hyperloop One's co-founder and chief technology officer. "Technology development testing can be a tricky beast, so you never know on a given day if things are going to work exactly like you want."
A day earlier, the company had announced the closing of $80 million in financing and said it plans to conduct a full system test before the end of the year. It also announced that it was changing its name from Hyperloop Technologies to Hyperloop One.

Hyperloop One builds off a design by Tesla and SpaceX CEO Elon Musk, who has suggested it would be cheaper, faster and more efficient than high speed rail projects, including the one currently being built in California.
Speaking on the eve of the first demonstration test of the propulsion in the Las Vegas desert, Hyperloop One CEO Rob Lloyd tried to dispel criticism that the technology is unproven and better suited for science fiction than practical use.
"It's real, it's happening now, and we're going to demonstrate how this company is making it happen," he said at a press conference.
He likened hyperloop technology to the emergence of the U.S. railroad system and the era of prosperity it ushered in.
Policy problems
The idea has skeptics, including professor James Moore II, director of the University of Southern California's Transportation Engineering Program.
He credited Musk for the new idea on how to move objects through tubes, but said backers would face myriad public policy issues before it's installed on a large scale, including questions about safety, financing and land ownership.
Such roadblocks are keeping self-driving vehicles off the road decades after the idea was born, he said.
"I would certainly not say nothing will come of hyperloop technology," Moore said. "But I doubt this specific piece of technology will have a dramatic effect on how we move people and goods in the near term."
Competition for location
Lloyd also announced a competition to determine where the first Hyperloop One system should be built, with an announcement expected next year.
Early applications could centre around ports — possibly replacing the trucks and trains that carry cargo from ships to factories and stores.
New investors include 137 Ventures, Khosla Ventures, Fast Digital, Western Technology Investment (WTI), SNCF (the French national rail company and a force behind high-speed rail in Europe), and GE Ventures.

BamBrogan said the company's engineering team is focused on finding efficiencies to reduce the cost of building a hyperloop.
"We want to deliver all the value that hyperloop can deliver — the safe, the efficient, the on demand, the fast. But, we want to deliver it at a cost basis that is absolutely transformative," he said.
Hyperloop One has competition in the space, including Hyperloop Transportation Technologies, a crowdsourced company that last month signed an agreement with the Slovakian government to build a hyperloop connecting Slovakia with Austria and Hungary.


Meho Krljic

Face recognition app taking Russia by storm may bring end to public anonymity



Quote
FindFace compares photos to profile pictures on social network Vkontakte and works out identities with 70% reliability


If the founders of a new face recognition app get their way, anonymity in public could soon be a thing of the past. FindFace, launched two months ago and currently taking Russia by storm, allows users to photograph people in a crowd and work out their identities, with 70% reliability.
It works by comparing photographs to profile pictures on Vkontakte, a social network popular in Russia and the former Soviet Union, with more than 200 million accounts. In future, the designers imagine a world where people walking past you on the street could find your social network profile by sneaking a photograph of you, and shops, advertisers and the police could pick your face out of crowds and track you down via social networks.

In the short time since the launch, FindFace has amassed 500,000 users and processed nearly 3m searches, according to its founders, 26-year-old Artem Kukharenko, and 29-year-old Alexander Kabakov.
Kukharenko is a lanky, quietly spoken computer nerd who has come up with the algorithm that makes FindFace such an impressive piece of technology, while Kabakov is the garrulous money and marketing man, who does all of the talking when the pair meet the Guardian.
Unlike other face recognition technology, their algorithm allows quick searches in big data sets. "Three million searches in a database of nearly 1bn photographs: that's hundreds of trillions of comparisons, and all on four normal servers. With this algorithm, you can search through a billion photographs in less than a second from a normal computer," said Kabakov, during an interview at the company's modest central Moscow office. The app will give you the most likely match to the face that is uploaded, as well as 10 people it thinks look similar.
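The claim of searching a billion photos in under a second from four servers rests on representing each face as an embedding vector and ranking candidates by similarity. As a hedged illustration only — the names, vectors, and plain cosine similarity below are invented for the example, and FindFace's actual algorithm and index are not public — a toy version in pure Python:

```python
# Toy sketch of embedding-based face lookup (NOT FindFace's algorithm):
# each face is a vector; lookup ranks profiles by cosine similarity.
# Real systems use learned embeddings and approximate indexes to reach
# sub-second search over ~1bn photos.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical database: profile name -> face embedding.
profiles = {
    "anna":  [0.9, 0.1, 0.3],
    "boris": [0.1, 0.8, 0.5],
    "vera":  [0.4, 0.4, 0.9],
}

def find_face(query, db, top_k=2):
    """Return the top_k profile names most similar to the query embedding."""
    ranked = sorted(db, key=lambda name: cosine(query, db[name]), reverse=True)
    return ranked[:top_k]

print(find_face([0.85, 0.15, 0.35], profiles))  # → ['anna', 'vera']
```

The app's behavior of returning the best match plus ten look-alikes is just this `top_k` list with a larger `k`.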
Kabakov says the app could revolutionise dating: "If you see someone you like, you can photograph them, find their identity, and then send them a friend request." The interaction doesn't always have to involve the rather creepy opening gambit of clandestine street photography, he added: "It also looks for similar people. So you could just upload a photo of a movie star you like, or your ex, and then find 10 girls who look similar to her and send them messages."
Some have sounded the alarm about the potentially disturbing implications. Already the app has been used by a St Petersburg photographer to snap and identify people on the city's metro, as well as by online vigilantes to uncover the social media profiles of female porn actors and harass them.


The technology can work with any photographic database, though it currently cannot use Facebook, because even the public photographs are stored in a way that is harder to access than Vkontakte, the app's creators say.
But the FindFace app is really just a shop window for the technology, the founders said. There is a paid function for those who want to make more than 30 searches a month, but this is more to regulate the servers from overload rather than to make money. They believe the real money-spinner from their face-recognition technology will come from law enforcement and retail.
Kukharenko and Kabakov have recently returned from the US, and Kabakov was due to travel to Macau and present the technology to a casino chain. The pair claim they have been contacted by police in Russian regions, who told them they started loading suspect or witness photographs into FindFace and came up with results. "It's nuts: there were cases that had seen no movement for years, and now they are being solved," said Kabakov.
The startup is in the final stages of signing a contract with Moscow city government to work with the city's network of 150,000 CCTV cameras. If a crime is committed, the mugshots of anyone in the area can be fed into the system and matched with photographs of wanted lists, court records, and even social networks.
It does not take a wild imagination to come up with sinister applications in this field too; for example authoritarian regimes able to tag and identify participants in street protests. Kabakov and Kukharenko said they had not received an approach from Russia's FSB security service, but "if the FSB were to get in touch, of course we'd listen to any offers they had".
The pair also have big plans for the retail sector. Kabakov imagines a world where cameras fix you looking at, say, a stereo in a shop, the retailer finds your identity, and then targets you with marketing for stereos in the subsequent days.
Again, it sounds a little disturbing. But Kabakov said, as a philosophy graduate, he believes we cannot stop technological progress so must work with it and make sure it stays open and transparent.
"In today's world we are surrounded by gadgets. Our phones, televisions, fridges, everything around us is sending real-time information about us. Already we have full data on people's movements, their interests and so on. A person should understand that in the modern world he is under the spotlight of technology. You just have to live with that."

Meho Krljic

Anyone who has ever tried to use YouTube's automatic captions will surely laugh out loud at this:



  Groundbreaking gadget claims to fit in your ear and translate foreign languages in real-time  

Meho Krljic



mac

It's not supposed to make sense. The AI strung together sentences it pulled out of other screenplays. People liked it, nonsensical as it is, so they made a short film out of it.

Meho Krljic

A Washington Post columnist claims that sped-up video playback is the key to the future:



I have found a new way to watch TV, and it changes everything


Quote
Game of Thrones. The Bachelor. House of Cards. It's now possible to watch everything. How? It's the future of storytelling.



I HAVE a habit that horrifies most people. I watch television and films in fast forward. This has become increasingly easy to do with computers (I'll show you how) and the time savings are enormous. Four episodes of "Unbreakable Kimmy Schmidt" fit into an hour. An entire season of "Game of Thrones" goes down on the bus ride from D.C. to New York.
I started doing this years ago to make my life more efficient. Between trendy Web shows, auteur cable series, and BBC imports, there's more to watch than ever before. Some TV execs worry that the industry is outpacing its audience. A record-setting 412 scripted series ran in 2015, nearly double the number in 2009.
"There is simply too much television," FX Networks CEO John Landgraf said last year. Nonsense, responded Netflix content chief Ted Sarandos, who has been commissioning shows at a startling rate. "There's no such thing as too much TV," he said.
So here we are, spending three hours a day on average, scrambling to keep up with the Kardashians, the Starks, the Underwoods, and the dozens of others on the roster of must-watch TV, which has exploded in the age of fragmented audiences. Nowadays, to stay on the same wavelength with your different groups of friends — the ones hating on "Meat Chad" and the ones cooing over Khaleesi — you have to watch in bulk.


This is where the trick of playing videos at 1.5x to 2x comes in — the latest twist in the millennia-old tradition of technology changing storytelling. The concept should be familiar to many. For years, podcast and audiobook players have provided speedup options, and research shows that most people prefer listening to accelerated speech.
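The arithmetic behind those savings is straightforward. As a quick sketch — the 22- and 55-minute episode runtimes below are my assumptions for a typical sitcom and drama, not figures from the column:

```python
# Back-of-the-envelope check on the time savings described above:
# real minutes spent watching a given runtime at a given playback speed.
def viewing_minutes(runtime_min: float, speed: float) -> float:
    """Minutes of sitting time to watch `runtime_min` of video at `speed`x."""
    return runtime_min / speed

# Four ~22-minute sitcom episodes at 1.5x:
print(viewing_minutes(4 * 22, 1.5))   # ≈ 58.7 min — about an hour
# A ten-episode season of ~55-minute drama episodes at 2x:
print(viewing_minutes(10 * 55, 2.0))  # 275.0 min, down from 550
```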
In recent years, software has made it much easier to perform the same operation on videos. This was impossible for home viewers in the age of VHS. But computers can now easily speed up any video you throw at them. You can play DVDs and iTunes purchases at whatever tempo you like. YouTube allows you select a speedup factor on its player. And a Google engineer has written a popular Chrome extension that accelerates most other Web videos, including on Netflix, Vimeo and Amazon Prime.
Over 100,000 people have downloaded that plug-in, and the reviews are ecstatic. "Oh my God! I regret all the wasted time I've lived before finding this gem!!" one user wrote.
But speeding up video is more than an efficiency hack. I quickly discovered that acceleration makes viewing more pleasurable. "Modern Family" played at twice the speed is far funnier — the jokes come faster and they seem to hit harder. I get less frustrated at shows that want to waste my time with filler plots or gratuitous violence. The faster pace makes it easier to appreciate the flow of the plot and the structure of the scenes.
Wonkblog writer Jeff Guo watches all his television shows at 160% speed. This clip from ABC's Modern Family will give you an idea of what that looks like. (ABC)
As I've come to consume all my television on my computer, I've developed other habits, too. I don't watch linearly anymore; I often scrub back and forth to savor complex scenes or to skim over slow ones. In other words, I watch television like I read a book. I jump around. I re-read. Sometimes I speed up. Sometimes I slow down.
I confess these new viewing techniques have done something strange to my sense of reality. I can't watch television in real-time anymore. Movie theaters feel suffocating. I need to be able to fast-forward and rewind and accelerate and slow down, to be able to parcel my attention where it's needed. The most common objection I hear is that this ruins the cinematic experience. Annette Insdorf, a film professor at Columbia, told me: "Sometimes watching a movie is like lovemaking: Isn't a sustained seduction more gratifying than momentary thrills?"
But the more I've learned about the history and the science of media consumption, the more I've come to believe this is the future of how we will appreciate television and movies. We will interrogate videos in new ways using our powers of time manipulation. Maybe not everyone will watch on fast-forward like I do, but we will all be watching on our own terms.
In a way, what's happening to video recalls what happened to literature when we stopped reading aloud, together, and started reading silently, alone. Beginning in the Middle Ages, people no longer had to gather in groups to hear tales or learn the news or study religion. They could be alone with a text and their own thoughts, an unprecedented freedom that led to political and religious turmoil and forever changed intellectual life.
With computers, video consumption is also becoming a solitary, self-paced act — and maybe a more analytical act, as well. If you believe, as I do, in the artistic potential of television and film, then perhaps we are on the brink of another cultural transformation — viewers finally seizing control of this medium. And the medium will be better for it.
FOR a very long time, life was limited by the rate at which we spoke. Although we have had writing systems for millennia, early texts were designed to be read aloud, meaning that literature unfolded at the pace of human speech.
Many ancient Greek and Roman documents, for instance, lacked punctuation, spaces or lowercase letters, making it challenging for people to understand them without sounding out the words syllable by syllable. "A written text was essentially a transcription which, like modern musical notation, became an intelligible message only when it was performed orally to others or to oneself," historian Paul Saenger writes.
There are physical limits to how quickly we can form sounds, as anyone who has attempted a tongue-twister can attest. Mouths need time to move into position for the next vowel or consonant. A good estimate for the natural rate of speech in English is 200 to 300 syllables per minute, which translates into 150 to 200 words per minute.
According to Audible, the audiobook company, the typical book recording is performed at 155 wpm. A 1990 study found that radio broadcasts run at 160 wpm on average, while everyday conversations, which use shorter words, occur at about 210 wpm.


For much of human history, this was the sound barrier for communicating ideas.
It's not that silent reading was impossible in antiquity. It was just very difficult. There exist tales of scholars who seemed to absorb books silently; in the fourth century, Saint Augustine told of an odd monk who read without forming the words with his mouth. "When he read," Augustine wrote, "his eyes scanned the page and his heart sought out the meaning, but his voice was silent and his tongue was still."
Historians debate whether these silent readers were regarded as freaks or the practice was merely unusual. Reading was still a group activity in the fifth and sixth centuries. One person read aloud while others listened. Even for scribes who copied manuscripts in solitude, the act of reading was intertwined with the act of speaking. Many early medieval monks who had taken vows of silence were still allowed to mumble as they read, Saenger writes, because mumbling was considered part of the reading process.
During the Middle Ages, scribes began introducing spacing and punctuation into texts, which made silent reading much easier for everyone. The practice began in monasteries around the 10th century and slowly spread to university libraries a few hundred years later, and finally to the European aristocracy by the 14th and 15th centuries, according to historian Roger Chartier.
The technique of silent, solitary reading released people from the sluggishness of the spoken word — as well as from the judgment of their peers. Reading in private gave people room to engage with a text, the freedom to think critically and sometimes heretically. Opinions too controversial for group reading could be disseminated and consumed in private. The result, historians say, was an intellectual, scientific — and spiritual — blossoming in Europe.
"Silent, secret, private reading paved the way for previously unthinkable audacities," Chartier writes. "In the late Middle Ages, even before the invention of the printing press, heretical texts circulated in manuscript form, critical ideas were expressed, and erotic books, suitably illuminated, enjoyed considerable success."
Chartier called silent reading the "other revolution" — together with the printing press and mass literacy, these developments created both the demand and the supply for a vast quantity of writing. The faster pace of silent reading accelerated the spread of new ideas and vaulted Western society toward religious and political schism.
"This 'privatization' of reading is undeniably one of the major cultural developments of the early modern era," Chartier argued.
WHAT silent reading also revealed was that the rate of human thought far outstrips the rate of human speech.
Broadcasters speak at about 160 wpm, but college students can comfortably devour a text at 300 wpm, which also seems to be the most efficient speed for reading comprehension, on average.
Some people, of course, read slower, and others read much, much faster. The beauty of text is that we absorb it at our own pace. Not so for audiovisual recordings, at least not for much of the 20th century. If you play back a tape or a phonograph record too quickly, the voices turn squeaky and unintelligible. Recordings remained difficult to skim until the 1950s, when researchers made a set of discoveries about human speech.
It turns out that sounds of the spoken word are vastly redundant. Vowels and consonants drag on longer than necessary for us to understand them. In the late 1940s, Harvard researchers discovered they could cover up more than half of a speech recording without damaging a listener's comprehension. The trick was to rapidly mute and unmute the audio. These silent gaps were brief enough that people's minds could fill them in easily. Words sounded choppy, but they remained perfectly intelligible.
"It is much like seeing a landscape through a picket fence," the researchers wrote. "The pickets interrupt the view at regular intervals, but the landscape is perceived as continuing behind the pickets."


The Harvard researchers rapidly muted and unmuted audio recordings. The top picture shows the original sound wave. The bottom picture shows the resulting sound wave, which has silent bits. Even though half of the sound wave has been destroyed, the researchers found that people could still understand it. (Miller and Licklider, 1950.)
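A minimal simulation of the "picket fence" effect is easy to write. The toy integer samples and the fixed window length below are placeholders, not the researchers' actual parameters:

```python
# Mute alternating fixed-length windows of a signal, as in the Harvard
# experiments: half the samples are silenced, but the surviving slivers
# keep their original timing, so speech stays intelligible.
def picket_fence(samples, window):
    """Zero out every other window of length `window`; length is unchanged."""
    out = list(samples)
    for start in range(0, len(out), 2 * window):
        for i in range(start, min(start + window, len(out))):
            out[i] = 0          # a muted "picket"
    return out

signal = [5, 5, 5, 5, 5, 5, 5, 5]
print(picket_fence(signal, 2))  # → [0, 0, 5, 5, 0, 0, 5, 5]
```

Note that the output is exactly as long as the input: the gaps are left in, so playback takes the same amount of time.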
A team of engineers at the University of Illinois soon had another idea: Instead of leaving the gaps in, why not cut them out and stitch the remaining slivers of audio together? For instance, deleting every other millisecond of audio would cause the recording to play in half the time. This new way of speeding up sound, which became known as the sampling method, had the benefit of not making people sound like chipmunks.
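The sampling method can be sketched the same way: instead of muting alternating windows, delete them and stitch the remainder together. (This toy version ignores the signal-processing details — windowing, crossfading — that real time-compression uses to avoid clicks.)

```python
# Sampling-method sketch: keep one window, drop the next, and splice.
# The result plays in half the time without the "chipmunk" pitch shift
# that simply playing the tape faster would cause.
def sample_compress(samples, window):
    """Keep alternating windows of length `window`, discarding the rest."""
    out = []
    for start in range(0, len(samples), 2 * window):
        out.extend(samples[start:start + window])  # keep this window...
        # ...and skip the next one entirely
    return out

signal = list(range(8))            # 8 "milliseconds" of audio
print(sample_compress(signal, 1))  # → [0, 2, 4, 6] — half the duration
```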
In the 1960s, a blind psychologist named Emerson Foulke began experimenting with this technique to accelerate speech. A professor at the University of Louisville, Foulke was frustrated with the slowness of recorded books for the blind, so he tried speeding them up. The sampling method proved surprisingly effective. In Foulke's experiments, speech could be accelerated to 250-275 wpm without affecting people's scores on a listening comprehension test.


These limits were suspiciously close to the average college reading rate. Foulke suspected that beyond 300 wpm, deeper processes in the brain were getting overloaded. Experiments showed that at 300-400 wpm, individual words were still clear enough to understand; except at that rate, many listeners couldn't keep up with the rapid stream of words, likely because their short-term memories were overtaxed.
Some, of course, fared better than others. Just as people naturally read at different rates, subjects varied in how well they could understand accelerated speech. Further studies found a connection to cognitive ability. Those with higher intelligence, as well as faster readers, were more adept at understanding sped-up recordings. (The NSA once considered using tests involving accelerated speech to screen for people who could become morse code operators.)
The most startling discovery, though, was that people actually enjoy listening to accelerated audio. Foulke and his colleagues noticed that college students preferred recordings that had been sped up by 30 percent, from 175 wpm to 222 wpm. More recent studies find that, given the choice, people will increase playback rate by about 40 to 50 percent on average — a 1.4 to 1.5x speedup.
This tendency extends to video as well, as experiments with video lectures and even Discovery Channel shows have shown. Increasing the tempo of a recording seems to stave off boredom and help people stay engaged. "With the slower pace, my attention span actually wavered, and I focused on too much detail," one subject told researchers at Microsoft.
Sometimes, people don't even notice that they are watching on fast-forward. Cable companies will slightly speed up shows to make room for more ads, but the difference can be hard to detect — in part because the brain adapts to the higher speeds.
In the 1970s and 1980s, the Defense Department began investigating compressed speech as a way to boost learning. Military-funded experiments showed that people can be trained to better understand accelerated recordings. Just a few weeks of regular exposure seemed to alter how people perceived and processed language, causing them to prefer faster and faster listening rates.
Some of those changes happen within minutes. An experiment in 1997 found that listening to just five sentences of accelerated speech boosted subsequent comprehension rates by 15 percent. This process may be related to how our brains adjust to unfamiliar accents. Have you ever noticed that it becomes easier and easier to talk to someone with a foreign accent? It's not them. It's your brain making short-term adaptations.
Our brains also make long-term adaptations to accelerated speech. Continued training increases people's accuracy rates and their comfort with sped-up recordings. Functional MRI scans show changes in how their brains respond to speech. Anecdotally, many subjects found that repeated exposure to accelerated speech caused speech at regular speeds to sound strange.
This seems to have happened to me as well. After watching accelerated video on my computer for a few months, live television began to seem excruciatingly slow. Ilya Grigorik, the Google engineer who invented the Chrome extension, had a similar experience. He regularly watches videos at double speed, adjusting the pace up or down depending on how complex the ideas are.
"Whenever I describe it to people, I get a very weird look," he said. "Then I actually convince them to try it. It's uncomfortable for them at first, but once they get into it, they really get into it."
WE all chart our own paths through a text. I rarely read a book straight through from start to finish. I take detours, I backtrack, and I always scan the plot summary on Wikipedia to learn what's coming next. Psychologists at the University of California, San Diego have found that people enjoy a story more if the ending has already been spoiled. Suspense, it seems, is overrated.
The Russian-American novelist Vladimir Nabokov believed that re-reading was the only way to truly enjoy a novel. Not until the second or third go-around can we perceive a novel's grand schemes and secrets. Of the initial encounter, he once said: "When we read a book for the first time, the very process of laboriously moving our eyes from left to right, line after line, page after page, this complicated physical work upon the book, the very process of learning in terms of space and time what the book is about, this stands between us and artistic appreciation."
There's no one right way to enjoy a book. Literary theorist Roland Barthes encouraged us not to treat novels so literally or linearly, but to traipse around in search of our own meanings. Why, then, do we still watch television straightforwardly? Why do we relinquish ourselves to the pace set by a film's director? Can't we find more interesting ways to be a couch potato?
For a long time, the answer was that the technology did not allow it. But with the rise of computer viewing, everyone can take charge of how they travel through a video. I often consume reality programs at double speed or faster because the idioms of these shows are so familiar. For me, watching "The Bachelorette" is like shucking a crab. I know where the juicy bits are, and I know which parts are inedible.
Accelerated speeds make it easier to perceive the structure of a story; slower speeds allow me to savor the details of the filmmaking. These alternative styles of viewing are no less illuminating. At double speed, the Red Wedding scene from "Game of Thrones" crosses the threshold from high drama to high farce. You start to see how the directors strained to create a moment of maximum trauma, how the death scenes are overacted, how the massacre operates like the mechanical gnashings of a meat-processing plant.
Wonkblog writer Jeff Guo watches all his television shows at 160% speed. This clip from HBO's Game of Thrones will give you an idea of what that looks like. (HBO)
I recently described my viewing habits to Mary Sweeney, the editor on the cerebral cult classic "Mulholland Drive." She laughed in horror. "Everything you just said is just anathema to a film editor," she said. "If you don't have respect for how something was edited, then try editing some time! It's very hard."
Sweeney, who is also a professor at the University of Southern California, believes in the privilege of the auteur. She told me a story about how they removed all the chapter breaks from the DVD version of Mulholland Drive to preserve the director's vision. "The film, which took two years to make, was meant to be experienced from beginning to end as one piece," she said.
I disagree. Mulholland Drive is one of my favorite films, but it's intentionally dreamlike and incomprehensible at times. The DVD version even included clues from director David Lynch to help people baffled by the plot. I advise first-time viewers to watch with a remote in hand to ward off disorientation. Liberal use of the fast-forward and rewind buttons allows people to draw connections between different sections of the film.
I found something of a sympathetic ear in Peter Markham, who teaches directing at the American Film Institute Conservatory. "This notion of privacy, of watching privately and forming your own cathedral of narrative — that's interesting," he said. "But that, I think, is mostly an intellectual or cerebral experience. The thing about dramatic narrative is that it creates an emotional, visceral, subconscious experience. That stuff has its own rhythm, its own insistence."
Markham argued that film is more than a stream of dialogue or a sequence of events. The timing of the images imprints on our brains in a special way. "If you speed up Hitchcock, if you speed up "Rear Window," you won't get the same experience," he said. "It's like trying to speed up a Beyoncé track. It's already at the perfect speed."
It didn't occur to me until later, of course, that people do mess with Beyoncé all the time. DJs chop her up, stretch her over new beats, snatch bits of her vocals to craft new songs. Today's cinema fans also engage in forms of creative remixing. They assemble montages of their favorite characters. They create entirely original shows by re-editing scenes from existing ones. The actor Topher Grace, for instance, famously made his own unauthorized version of the "Star Wars" prequel movies called "Episode III.5: The Editor Strikes Back." Those who have seen it say it's a masterful recombobulation of those three flawed films.
Henry Jenkins, a media theorist at the University of Southern California, reminded me that creative repurposing has been happening in fan communities for decades. Throughout his career, Jenkins has studied the rise of "participatory culture" — ways in which fans take control of favorite stories through fanzines, fan art, fan fiction, and more recently, fan videos. "Fans reject the idea of a definitive version produced, authorized, and regulated by some media conglomerate," Jenkins wrote over a decade ago. "Instead, fans envision a world where all of us can participate in the creation and circulation of central cultural myths."
Perhaps fan culture offers the most optimistic vision for the future of media consumption. The power of the auteur is diminishing, but our appreciation for the art form is increasing. More and more, we will watch TV on our computers, on our own terms, creating our own meanings and deriving our own, private pleasures.
"I think your experience is very similar to my own," Jenkins said. "I do treat television more and more like a book. I totally get that analogy, and it's a good way to think about the degree of control we now have over what we watch — which has been building up over time, with VCRs, then DVRs, and now streaming and digital distribution. We're learning to think about television in a different way."
"I'm fully convinced that everything is better in a box set," he added.


Netflix, which is essentially the motherlode of box sets, has made this kind of careful viewing much easier. That's one reason that serialized shows have become so popular in recent years. Since audiences can easily catch up on missed episodes — many of them are binge-watching anyway — show-runners can tell longer, more complicated stories with less repetition. The rewind button allows television to be a little more sophisticated. If you didn't understand the first time, just watch again.
But the spread of solitary, customized viewing will not mean the demise of television culture — quite the opposite. People will watch an episode and dissect it on Twitter; they will share their favorite scenes and watch them on repeat. "While viewing is becoming a more solitary, personal activity, the flip side is that fan communities have grown stronger," Jennifer Holt, a media scholar at the University of California at Santa Barbara, told me. "People still want to connect. They still want that social experience, only now it's all happening online."
This practice has expanded the dialogue between the makers and the consumers of television. "Show creators, writers and directors are now extremely sensitive to what the blogosphere is saying about their shows," Paris Barclay, the president of the Directors Guild of America, said a few years ago. Barclay, who has worked on shows such as "Glee," "Empire" and "Scandal," was wary of this development. "Some shows have become increasingly dull because taking risks with the show is discouraged," he said. "Audiences generally want to see a different version of the show that they love. They don't really want to see it become something else."
But as Jenkins, the media theorist, points out, creators have always adapted their work to suit who was listening. "Storytelling is a bardic medium," he said. "Bards like Homer would tell a story to a roomful of people and he would be attentive to what they liked and what they didn't like. The same was true of Dickens, whose novels were published serially. He changed plot points and characters on the fly."
Now that tools are making it increasingly easy to alter the flow of how we watch films and television, viewers will also have power to change the plot and the characters of a show to suit their own tastes. We should look forward to a future that involves more cross-pollination, more crazy fan-theories, more creative misunderstandings, all of it enabled by new ways of consuming television, whether that means binge-watching, surfing clips on social media or even watching on fast-forward. We risk transforming, perhaps permanently, the ways in which our brains perceive people, time, space, emotion. And isn't that marvelous?
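For what it's worth, the time savings behind Guo's 160 percent habit reduce to simple arithmetic. A minimal sketch (the season length here is a hypothetical example, not from the article):

```python
def viewing_minutes(footage_minutes: float, speed: float) -> float:
    """Real-world minutes needed to watch `footage_minutes` of video at `speed`x playback."""
    if speed <= 0:
        raise ValueError("playback speed must be positive")
    return footage_minutes / speed

# A hypothetical ten-episode season of 55-minute episodes: 550 minutes of footage.
season = 10 * 55
normal = viewing_minutes(season, 1.0)   # 550 minutes
sped_up = viewing_minutes(season, 1.6)  # about 344 minutes at Guo's 160% speed
print(f"Minutes saved: {normal - sped_up:.2f}")  # Minutes saved: 206.25
```

At 1.6x, a viewer reclaims more than three hours per season — which is the whole appeal, whatever the editors think of it.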




ridiculus

Well, I think many people already use that tactic. Besides, most people will read this text in much the same way.
While there is death, there is hope.

Meho Krljic

What Air Conditioning Can Teach Us About Innovation and Laziness




A version of this post originally appeared on Tedium, a twice-weekly newsletter that hunts for the end of the long tail.

The 1902 invention of air conditioning has long been hailed as one of the most important inventions of the 20th century, one that has forever changed the way we live.
But as we hit an era when robots become more human than ever and try to steal our jobs, I find myself wondering—what can we learn from the additional freedom that air conditioning gave us?
Sure, it improved our society by making temperature something that we can control, but what about the problems that came with this cool breeze?
(To start with, the heavy energy use: Roughly 5 percent of US energy use can directly be attributed to air conditioning, according to Energy.gov. Roughly two-thirds of homes in the U.S. have some form of air conditioning, but globally, we've tended to be the exception ... until recently, that is, thanks to countries like China and India jumping on the trend within the past decade.)
Did it make us lazier or less capable as human beings? Did it make it so that we tend to cut corners a little more often?
And, considering we're on the cusp of society-changing technologies with similar effects, can we prevent that from happening again?
Let's put this in architectural terms.

Air conditioning gave us skyscrapers, but did air conditioners make architects lazy?

In a 2012 piece marking the 110th anniversary of Willis Carrier's invention of air conditioning, Architect magazine honored the invention, but sounded a bit contrite about what Carrier wrought. The piece featured an interview with The Land Institute's Stan Cox, who had recently written a book highlighting the technology's failings, and included this passage:

"We have become conditioned to air conditioning, to manufactured weather, and have abandoned the strategies that undeveloped and developing countries in hot climates still use. This is for good, certainly, or mostly: air conditioning makes for better economic productivity, and certainly helps preserve lives during heat waves. But in forgetting the ways that we used to cope with high temperatures, we may now be dependent on Carrier."

There's a lot of good that came out of the ability to regulate the temperature of a room. We wouldn't have skyscrapers, clean rooms for building advanced computer chips, shopping malls, or multiplexes without air conditioning. But on the other hand, we might've had a little more creativity in our home and office design had we not kept it around.
See, prior to the air conditioner reaching homes around the country, architects had to think more creatively about keeping people cool when options were more limited. This meant taking advantage of breezes, room design, and dimensional layout in a way that maximized the heat when it was necessary and kept things cool when it wasn't.
And it meant taking advantage of foliage around the home to build in some natural shade, as well as to build porches, which were often much cooler than the insides of homes during warm days.
A good example of architectural strategy in action is Thomas Jefferson's Monticello, perhaps the world's most famous building forged on passive cooling techniques. Built on a hilltop, the building took advantage of the natural breezes the location offered by having large windows and an open floor plan. And while the heat of the Virginia summer might have been sweltering, the building's brick design helped to keep the heat out of the home until the latter part of the day, when things were starting to cool down.


The American Institute of Architects, in a 1979 article published in its quarterly magazine, cited Jefferson's work as inspiring for modern architects, who essentially needed to be reintroduced to Jefferson's ideas a scant 35 years after air conditioning became common in homes.
"What is most remarkable about Monticello, though, is not that Jefferson's cooling strategies worked but the fact that they stand up so well today," author Kevin Green wrote. "Jefferson came by his cooling intuitively, not scientifically."
The fact that passive cooling needed to be reintroduced to architects in the first place sticks in the craw of some critics of modern air conditioning. Lloyd Alter, who has become perhaps one of the most famous critics of air conditioning as TreeHugger's managing editor, frequently pulls out a quote from Cameron Tonkinwise of Carnegie Mellon's School of Design that takes architects to task for this legacy of simplistic thinking:
"The window air conditioner allows architects to be lazy," Tonkinwise is quoted as saying. "We don't have to think about making a building work, because you can just buy a box."
Tonkinwise, and by extension Alter, have a point on this matter, and it can largely be seen in the design of modern homes compared to those from earlier generations. In the era of the McMansion, high ceilings, porches, and ample plantlife ultimately lose out.


— Steven Anderson, a former general, arguing to NPR that the U.S. government spent $20.2 billion on air conditioning in the war zones of Iraq and Afghanistan. Anderson's creative accounting, which first surfaced in 2011, included a lot of additional steps that most people who crank up the AC don't—including the building of infrastructure to get the fuel for the air conditioner where it needs to be. The Pentagon quickly denied Anderson's claims, and emphasized that its annual energy use was a much more modest $15 billion per year for the entire world, a cost that isn't limited to AC.

Air conditioning and the philosophy of taking the easy way out

These days, the era of sustainable design is helping to bring back the ideas some of the earlier technology used—if you wanted to, you could take a free online course from Big Ass Fans right now that discusses the importance of air flow.
But it's important to note that it took us a while to get back to this point. Generations, in fact. A lot of the reason why it came back up as an issue in the first place is because of the need to fix some of the problems the original solution caused—particularly, the waste of energy and the fact that running an AC all the time costs a lot of money.
Considering that, I'd like to ponder an idea—what if we attempted to solve these larger problems created by a new form of technology before it went into wide use? What if we thought these damn problems through?
In a lot of ways, it's arguably because a groundbreaking technology so often represents the easy way out. There's a term for this, cognitive laziness, and it tends to explain a whole heckuva lot.
Blogger Michael Michalko, the author of the book Thinkertoys: A Handbook of Creative-Thinking Techniques and an expert on this kind of stuff, put this in some pretty stark terms in a 2012 blog post.
"One of the many ways in which we have become cognitively lazy is to accept our initial impression of the problem that [we encounter]. Once we settle on an initial perspective we don't seek alternative ways of looking at the problem," he wrote. "Like our first impressions of people, our initial perspective on problems and situations are apt to be narrow and superficial. We see no more than we expect to see based on our past experiences in life, education and work."
In other words, if we feel that we've found a solution to a problem, we're predisposed to putting that issue back on the shelf, as if it's no longer a big thing. It takes the exposure of completely different problems for us to even consider that the solution might be imperfect.
We bend to the will of the obvious solution.


This translates in a lot of contexts. For example, say you're signing up for a social network for the first time, and your options are to read the end user license agreement or to just hit the OK button. I don't know about you, but I'm probably OK with passing up on the opportunity to read a bunch of random stuff that separates me from some wacky new filters on my phone.
So it's hard to even get mad at architects who chose simple efficiency over complexity, or (to highlight a contemporary example) early carmakers that went with gasoline instead of something better for the environment. Because of human nature, it just makes sense that despite all the other advantages that came with air conditioning, the more challenging things that came with the invention—the fact that conservation and efficiency still have their place—didn't initially get their due.
Around this time last year, leading air conditioning critic Lloyd Alter, who I highlighted above, wrote a mea culpa on TreeHugger, making room for life with the air cranked up. Rather than arguing that air conditioning makes us lazy, he argued that living without air conditioning in the modern era was pretty much impossible in many places because of climate change.
"I have written before that air conditioners are like cars; they have changed our lives and we have built our cities around them," he wrote in a piece that frequently challenged his previous thinking on the issue. "Our houses and modern apartments are designed in such ways that they would be uninhabitable without air conditioning, as uninhabitable as our suburbs are without cars. The climate is changing and just making it hotter and harder to live without AC."
When CBC caught up with Alter soon after, he admitted that maybe his hard-line stance was off, especially considering the fact that building design has made it very hard to live any other way. "I realized I was being a hypocrite," he said.
He's right. After more than a hundred years of air-conditioned buildings, it's going to be very hard to change course, even if we wanted to at this point.
Let's face it: Deeply rooted innovation has a way of weakening our resolve, even in the best of us.



Meho Krljic

Tech 'utopia is creepy,' according to Nicholas Carr



Quote
Fully automated, self-driving cars are likely decades away from being a reality, says Nicholas Carr, the author whose books about technology and culture seek to curb the heady enthusiasm regarding the digitalization of everything.


"I think a lot of the visions of total automation assume that every vehicle will be automated and the entire driving infrastructure will not only be mapped in minute detail but will also be outfitted with the kind of sensors and transmitters and all of the networking infrastructure that we're going to need," Carr tells CIO.com. Autonomous car proponents and technology enthusiasts in general will certainly disagree with Carr.
No surprise there. In May 2003, the Harvard Business Review published Carr's article "IT Doesn't Matter," which raised the ire of technology luminaries such as Bill Gates and Carly Fiorina by challenging the notion that IT infrastructure provided enterprises with a strategic advantage. Most CIOs didn't care for it either, feeling as though their roles were being denigrated at a time when CEOs were beginning to understand that some technology, well, does matter. Carr has long since been vindicated. The utility computing model Carr described in 2003 became known as cloud computing. And with each SaaS app an enterprise CIO implements, software as a service becomes commoditized a bit more.
Now Carr is back with a new book, "Utopia is Creepy: And Other Provocations," which W.W. Norton & Co. is releasing on Sept. 6. It's a compendium of essays, from "Is Google Making Us Stupid," to "Life, Liberty, and the Pursuit of Privacy." Software may be eating the world, but one thing that perpetually eats at Carr is the irrational exuberance espoused by Silicon Valley, which preaches that technology is the answer to everything. Apps will usher in world peace and end world hunger. At the very least, they'll enable us to take a nap while tooling around in our API and lidar-fueled motor vehicles. Well, one of these decades, anyway.
CIO.com: What is the premise of "Utopia is Creepy?"
Nicholas Carr: Utopia is Creepy is a collection of pieces that I've written and posted to my Rough Type blog over the last dozen years. This is kind of Rough Type's greatest hits, as well as some essays that I wrote at the same time, including "Is Google Making Us Stupid," which is probably the best-known of those. When Rough Type hit its 10th anniversary in 2015 I started going back through the posts and I realized there were a lot of pieces that had some resonance.
I also started to see was that I'd given a blow-by-blow description of what was going on in the tech world, particularly the rise of what used to be called Web 2.0 and is now known as social media and social networking. It was also a critique of the Silicon Valley ideology -- the sense that the internet and social media was bringing down the barriers to self-expression, freeing people up and if we place our trust in Silicon Valley and the programmers there it would lead to a kind of utopia. It's a collection but one with a theme that runs through it. CIO.com: What are a couple of examples of this Silicon Valley irrational exuberance that comes to mind?
Carr: When we saw the resurrection of the internet after the big dot-com bust, there was this sense that the walls of old media and the gatekeepers were coming down and there was a golden age of people in control of their own expression and what they read. This was a strong theme back then. There was a Wired cover story called "We are the web" that presented this as a whole new world opening up. What we've seen since then has been very different. The old gatekeepers, to the extent they were gatekeepers, have been replaced by companies like Facebook and Google and companies that really now have become the new media companies and are very much controlling the flow of information.
If anything we're more deeply in the grip of the new media companies than we ever have been. Another example is a sense that we human beings need to get out of the way and let algorithms and robots take over because in some fashion -- at least this is how it's presented -- robots are more reliable and more perfect and faster than human beings. This is translated by Silicon Valley and others that we're all going to be freed up and won't have to work anymore and that by handing off our jobs to machines and computers and robots that we'll have all of this time and we'll be more creative.
CIO.com: What I'm hearing out of MIT, other academia and from analysts at Gartner and Forrester is that robots will augment and support humans in their work, rather than replace them outright.
Carr: There is a well-argued counter philosophy to this sense that ultimately artificial intelligence and robots will take over the bulk of labor. I still think that's a common theme coming out of Silicon Valley from people like Peter Thiel and Marc Andreessen, who quite explicitly say it may take a little time but ultimately, as Andreessen puts it, software is eating the world and we're going to figure out what happens in the post-work environment.
This is a dream that's been around since the Industrial Revolution -- that machines are going to take over -- and it never panned out. Which is not to say there aren't huge technological shifts in the labor force -- there are – but I think it's an expression of faith in computers to solve in a magical way these difficult problems that we're always going to be dealing with. The danger here is that often as a society we're prone to buying into this sense that if we just automate, for example, healthcare suddenly we'll have an efficient system and that will cure a lot of ills. And it has the effect too often of generating complacency, that we don't have to struggle with these things because technology is going to solve these problems.
CIO.com: I can see that. Every time we have a major machine learning breakthrough – the computer beating a human in Go comes to mind – the excitement fades as we're reminded of how far we are from true AI.
Carr: You have to suspend your disbelief when you're interacting with your computers or your smartphone or Siri to not recognize how far away we are [from AI]. Which is not to downplay our enormous advances. If you look at self-driving cars, particularly when Google announced in 2010 that it had a car running on well-mapped highways that was pretty darn good, that announcement came at a time when people said driving is one thing that computers could never take over because it's all of these tacit skills and instinct and intuition built up. Even there I think we're still further away from a fully automated vehicle than most people think. But nevertheless it is truly amazing what has happened.
CIO.com: If you listen to Google, Tesla and others the self-driving technology is far closer than the political reality. How far away are we from fully automated cars?
Carr: I think we can get up to 99 percent [of the technology necessary for fully autonomous vehicles] pretty quickly but is that going to be enough? I don't think it is. Because driving has so many uncertainties. It seems to me that in order to get to the point of full automation you're going to have all sorts of infrastructural changes. You're going to have to deal with the fact that the automotive fleet is very long lasting. It takes many years to turn over the cars that people drive, which means you're going to have fully automated vehicles, semi-automated vehicles and human-driven vehicles on the road at the same time and that becomes very, very complicated.
I think a lot of the visions of total automation assume that every vehicle will be automated and the entire driving infrastructure will not only be mapped in minute detail but will also be outfitted with the kind of sensors and transmitters and all of the networking infrastructure that we're going to need. And it just seems to me that it will take a long time -- decades. Having said that I think you can see areas where you can isolate automated vehicles. You can envision some kind of system where you automate long-haul trucking. And you can see a kind of Google car serving as a taxi in certain situations. But I think this dream of everything being automated in the next 10 years is unrealistic. I could see all of these stages en route where different aspects become automated even if the whole system does not.
CIO.com: How do you see the role of the CIO evolving?
Carr: I see a dual role for CIOs now. As we move toward an era where you're no longer in control of the IT, such as managing the data center your company runs on, you become the broker between all of the IT capabilities, both in-house and outside -- and you become a strategic broker in figuring out how we should get the right mix of capabilities and where they should come from. And I think that you need smart people to do that in companies. The other is a more strategic role of figuring out where we should invest in a way that will competitively distinguish us. That's an important role, and whether it has to reside in a CIO-type role or can move to other places in the company -- I think there's always going to be a certain tension there.

Meho Krljic

EmDrive: Nasa Eagleworks' paper has finally passed peer review, says scientist in the know



Quote
An independent scientist has confirmed that the paper by scientists at the Nasa Eagleworks Laboratories on achieving thrust using highly controversial space propulsion technology EmDrive has passed peer review, and will soon be published by the American Institute of Aeronautics and Astronautics (AIAA).
Dr José Rodal posted on the Nasa Spaceflight forum – in a now-deleted comment – that the new paper will be entitled "Measurement of Impulsive Thrust from a Closed Radio Frequency Cavity in Vacuum" and is authored by "Harold White, Paul March, Lawrence, Vera, Sylvester, Brady and Bailey".


There is also a line of text that EmDrive enthusiasts believe could be from the paper's abstract, which reads: "Thrust data in mode shape TM212 at less than 8×10⁻⁶ Torr environment, from forward, reverse and null tests suggests that the system is consistently performing with a thrust to power ratio of 1.2 +/- 0.1 mN/kW".
Rodal also revealed that the paper will be published in the AIAA Journal of Propulsion and Power, a prominent journal published by the AIAA, which is one of the world's largest technical societies dedicated to aerospace innovations.
Eagleworks is an experimental lab that is to Nasa essentially what the secretive Google X "moonshot" R&D lab is to Google/Alphabet, and the space agency is not yet ready to place its official stamp of approval on a technology many still believe does not work.
Although Eagleworks engineer Paul March has posted several updates on the ongoing research to the Nasa Spaceflight forum showing that repeated tests conducted on the EmDrive in a vacuum successfully yielded thrust results that could not be explained by external interference, those in the international scientific community who doubt the feasibility of the technology have long believed real results of thrust by Eagleworks would never see the light of day.
In March, the same Eagleworks engineer announced that their paper was going through the peer review process, but they had no idea when it would be published as "peer reviews are glacially slow".
However, it seems that the wait might finally be over, if Rodal, a long-time supporter of the EmDrive who is building his own version of the Nasa aluminium frustum, is to be believed.


Roger Shawyer is the British scientist who first proposed the concept of EmDrive in 1999, and was ridiculed and even accused of fraud by some in the international space community despite his work being funded by the UK government and licensed by Boeing.

How the EmDrive works
The EmDrive is the invention of British scientist Roger Shawyer, who proposed in 1999 that based on the theory of special relativity, electricity converted into microwaves and fired within a closed cone-shaped cavity causes the microwave particles to exert more force on the flat surface at the large end of the cone (i.e. there is less combined particle momentum at the narrow end due to a reduction in group particle velocity), thereby generating thrust.
His critics say that according to the law of conservation of momentum, his theory cannot work as in order for a thruster to gain momentum in one direction, a propellant must be expelled in the opposite direction, and the EmDrive is a closed system.
However, Shawyer claims that following fundamental physics involving the theory of special relativity, the EmDrive does in fact preserve the law of conservation of momentum and energy.

Shawyer believes EmDrive could transform the aerospace industry and potentially solve both the energy crisis and climate change, while also speeding up space travel by making it much cheaper to launch satellites and spacecraft into orbit. He is also actively working to test the technology out on unmanned aerial vehicles (UAV), in the hope of creating feasible flying cars.
He told IBTimes UK that he is as excited as other EmDrive enthusiasts to read the upcoming paper by Nasa Eagleworks, but adds that any thrust measured will be very small, probably equivalent to where Shawyer's research was a decade ago.
Incidentally, the 10-year-long rule about classifying research done for the UK government has now expired, and Shawyer has made four papers publicly accessible on his website for anyone to read.
"I daresay America will have a lot to say about it, but it's not really new. It's all been done before 10 years ago. If you bother to go through the [declassified] papers, you can see the levels of thrust we achieved are significantly higher than the levels of thrust that Nasa Eagleworks has got," he said.
"People all around the world have been measuring thrust. You've got guys building them in their garages and very large organisations building cavities too. They're all generating thrust, there's no great mystery. People think it's black magic or something, but it's not. Any physicist worth his salt should understand how it works, or if they don't, they should change their profession."


Shawyer is now actively working on the second-generation EmDrive with an unnamed UK aerospace company and the new device is meant to be able to achieve tonnes of thrust (1T = 1,000kg), rather than just a few grams.
"We're trying to achieve thrust levels that go up by many orders of magnitude, where the Q values of the cavities are between 1 × 10⁹ and 5 × 10⁴. Once you reach the levels of thrust we anticipate we will reach, you can apply it anywhere," he told IBTimes UK. "Essentially, anything that currently flies or drives or floats can use EmDrive technology."
Despite the increasing interest in EmDrive, the fact that he will soon be vindicated, and the fact that in July 2015 his own paper describing space propulsion on drones passed peer review, Shawyer does not plan to release any more papers for quite a while.
"The work we're doing is difficult and expensive and the people paying for it don't necessarily want to give it away to the rest of the world, but EmDrive will make a huge impact and a lot of people have thought of a lot of things to apply it to," he said.
Recently, several scientists including Dr Mike McCulloch of Plymouth University and Dr Arto Annila from the University of Helsinki have been using exotic physics to try to explain why the EmDrive works, and they have seen that the thrust detected in experiments by Shawyer and other scientists matches their theoretical calculations.
IBTimes UK has contacted AIAA for a statement, but had received no response as of press time.
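To put the reported figure in context: a thrust-to-power ratio of 1.2 mN/kW is linear, so scaling it to other power levels is simple arithmetic. A back-of-the-envelope sketch (the input power levels below are hypothetical examples, not from the paper):

```python
THRUST_RATIO_MN_PER_KW = 1.2  # Eagleworks' reported figure, +/- 0.1 mN/kW

def emdrive_thrust_mn(power_kw: float, ratio: float = THRUST_RATIO_MN_PER_KW) -> float:
    """Thrust in millinewtons for a given input power, assuming the ratio holds linearly."""
    return power_kw * ratio

# At the ~1 kW scale of a lab test, thrust is tiny --
# 1.2 mN is roughly the weight of a 0.12-gram mass.
print(emdrive_thrust_mn(1.0))
# Even a 100 kW input would yield only about 0.12 newtons.
print(emdrive_thrust_mn(100.0))
```

This is why Shawyer stresses that any measured thrust will be very small: at the reported ratio, dramatic applications would require either enormous power or the many-orders-of-magnitude improvement he claims for the second-generation device.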

Meho Krljic

China Confirms Its Space Station Is Falling Back to Earth

Quote
In a press conference on Wednesday, Chinese officials appear to have confirmed what many observers have long suspected: that China is no longer in control of its space station.
China's Tiangong-1 space station has been orbiting the planet for about five years now, but recently it was decommissioned and the Chinese astronauts returned to the surface. In a press conference last week, China announced that the space station would be falling back to Earth at some point in late 2017.
Normally, a decommissioned satellite or space station would be retired by forcing it to burn up in the atmosphere. This type of burn is controlled, and most satellite re-entries are scheduled to burn up over the ocean to avoid endangering people. However, it seems that China's space agency is not sure exactly when Tiangong-1 will re-enter the atmosphere, which implies that the station has been damaged somehow and China is no longer able to control it.
This is important because it means Tiangong-1 won't be able to burn up in a controlled manner. All we know is it will burn up at some point in late 2017, but it is impossible to predict exactly when or where. This means that there is a chance debris from the falling spacecraft could strike a populated area.
Fortunately, it's unlikely anyone will be injured. Most of the parts of the space station will burn up in the atmosphere, and the few that do make it to the ground probably won't land in any populated areas. (It's a big planet.) Still, watch the skies late next year. You never know what could be falling down on you.
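The "it's a big planet" point can be made concrete with a back-of-the-envelope sketch. All figures below are rough public estimates, not mission data, and the calculation assumes a single debris impact point distributed uniformly over Earth's surface, which is a simplification:

```python
# Back-of-the-envelope: chance that uncontrolled debris lands on a built-up area.
# All numbers are rough estimates chosen for illustration only.

LAND_FRACTION = 0.29         # ~29% of Earth's surface is land
URBAN_LAND_FRACTION = 0.03   # ~3% of land is built-up (rough estimate)

# Probability that a uniformly random impact point is in a built-up area
p_urban = LAND_FRACTION * URBAN_LAND_FRACTION
print(f"Chance a single random impact point hits a built-up area: {p_urban:.1%}")
```

Even before accounting for the fact that most of the station burns up, the odds of any surviving fragment reaching a built-up area are under one percent in this crude model.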

zakk

Whatever falls on my field is mine!
Why shouldn't things be largely absurd, futile, and transitory? They are so, and we are so, and they and we go very well together.

Meho Krljic

Just as long as it doesn't set the wheat on fire.

Mica Milovanovic

Quote
Whatever falls on my field is mine!


Boo, landowners...
Mica

Aco Popara Zver

Whoever it lands on will get a visit from Minister Stefanović and the special police.
What good are riches and all the world's glory to me, when beautiful Nirdala must die



Meho Krljic

Tomorrow's Wars Will Be Livestreamed



Quote
Hundreds of thousands of people around the world watched the start of the invasion of Mosul, a city held by ISIS in Iraq, live on Facebook and YouTube this morning.
The most popular stream—there were several, some of which are still live—was shared by Kurdish outlet Rudaw and re-posted by outlets like the Washington Post and Channel 4 in the UK. While some viewers commented on the merits of the offensive, for others, the livestream itself was the most startling thing. As angry cartoon faces and "Wow!" emoticons floated over top of live images of war, viewers noted that it all seemed a bit too much like a sci-fi fever dream about a war-obsessed culture.
For most English-language viewers watching these streams, there was no explanation, no given context, no subtitles or translation—merely images of a mostly-barren foreign landscape peppered with men and trucks, idling and standing around, sparsely punctuated by violence. And in this void, commenters cried, "WHY THERE IS NO SHOOTING, EXPLOSION. I WANT TO WATCH A WAR," cue the smiley face.
Read More: Autoplay Is for a G-Rated World
But which war? Perhaps the many fictionalized encounters in movies, videogames, and television, in places with names real and imagined, not that it really matters if the latest Call of Duty takes place in Afghanistan or outer space?
"War is 99 percent boredom and one percent sheer excitement and/or terror," David Axe, war correspondent and editor of news site War Is Boring, wrote me in an email. "You hurry up and wait, move slowly, advance, hold ground, resupply, update your plan and intel, rest, eat, refuel, advance again, etc., etc. It can be tedious. Until the fighting starts. And then it's not tedious at all. But in reality, war is in the inverse of action movies."
The effect of watching the livestream of the Mosul offensive without any context or explanation is similar to what I felt when I watched Werner Herzog's 1992 film Lessons of Darkness. In it, Herzog shows the viewer slow-tracking footage of burning Kuwaiti oil fields after the Gulf War, the outcome of the army retreat's "scorched earth" policy.
But Herzog doesn't give us any political or historical context; he doesn't say it's in Kuwait, or even on Earth. "The film has not a single frame that can be recognized as our planet, and yet we know it must have been shot here," he said of the film. His intention was to present conflict in metaphysical terms, and to probe deeper than newscasts do despite a nightly deluge of facts and minutiae. From the semantic void of images that could be from any war emerges a deeper understanding about the nature of war itself.
But in 2016, decades after Lessons of Darkness was completed and on social media instead of in a darkened arthouse theatre, the void spits out something other than deep, metaphysical understanding about human nature. Instead, in the comments, people ask for money. They talk about porn. They quote Green Day lyrics. They call people "cucks."
To be fair, however, not everyone reacted this way. But a lot of people did.
"There's journalistic value in the livestream," Axe wrote, and noted that it's generally good practice to ignore the comments anywhere on the web.
But as the 2016 election cycle has shown us, a messy comments section come to terrifying life isn't a bad descriptor of the current US body politic.
In the end, livestreaming a war—if one judges by the live comments—looks a lot like livestreaming a concert or any other event. Does the low overhead of sharing someone else's stream on your outlet's page coupled with the high viewer payoff almost ensure that we're going to see it again? "Absolutely yes," Axe wrote.

Meho Krljic

One of the big advantages of hybrid and electric cars is that their noise pollution is noticeably lower than that of classic vehicles that burn dinosaurs (copyright by Scallop). Of course, nothing beautiful lasts forever (except Nina Hartley), so regulations are already being drawn up that will require these vehicles to emit a loud sound when moving at low speeds, so that pedestrians staring at their phones, or just generally inattentive, don't die in flocks.

zakk

The news somehow passed me by that somewhere they're also introducing audible traffic signals for zoned-out pedestrians (along the lines of STREET, STOP, YOU MONKEY). I don't know...
Why shouldn't things be largely absurd, futile, and transitory? They are so, and we are so, and they and we go very well together.

Meho Krljic

 For the first time, living cells have formed carbon-silicon bonds  
Quote
   Scientists have managed to coax living cells into making carbon-silicon bonds, demonstrating for the first time that nature can incorporate silicon - one of the most abundant elements on Earth - into the building blocks of life.
While chemists have achieved carbon-silicon bonds before - they're found in everything from paints and semiconductors to computer and TV screens - they've so far never been found in nature, and these new cells could help us understand more about the possibility of silicon-based life elsewhere in the Universe.
  After oxygen, silicon is the second most abundant element in Earth's crust, and yet it has nothing to do with biological life.
Why silicon has never been incorporated into any kind of biochemistry on Earth has been a long-standing puzzle for scientists, because, in theory, it would have been just as easy for silicon-based lifeforms to have evolved on our planet as the carbon-based ones we know and love.
Not only are carbon and silicon both extremely abundant in Earth's crust - they're also very similar in their chemical make-up.
One of the most important features carbon and silicon share is the ability to form bonds with four atoms at the same time. This means they're capable of linking together the long chains of molecules needed to form the basis of life as we know it - proteins and DNA.
And yet, silicon-based lifeforms do not exist outside the Star Trek universe - as far as we know.
"No living organism is known to put silicon-carbon bonds together, even though silicon is so abundant, all around us, in rocks and all over the beach," says one of the researchers, Jennifer Kan from Caltech.
  To be clear, Kan and her team played a big role in getting living cells to achieve carbon-silicon bonds - this was not something that the cell could have easily done on its own.
But the experiment is proof that these bonds can be formed in nature - so long as you have the right conditions.
The researchers started by isolating a protein that occurs naturally in the bacterium Rhodothermus marinus, which thrives in the hot springs of Iceland.
They liked this protein, called cytochrome c enzyme, because while its main role is to transport electrons through the cells, lab tests revealed that it could facilitate the kinds of bonds that could attach silicon atoms to carbon.
After isolating the protein, they inserted the gene for it into some E. coli bacteria to see if it could facilitate the production of carbon-silicon bonds inside their living cells.
The first iteration of these silicon-engineered bacteria didn't do much, but the team continued to mutate the protein gene within a specific region of the E. coli genome until something very cool happened.
"After three rounds of mutations, the protein could bond silicon to carbon 15 times more efficiently than any synthetic catalyst," Aviva Rutkin reports for New Scientist.
The fact that this bacterium has been engineered to produce carbon-silicon bonds more efficiently than chemists can in the lab is exciting for two reasons. First, it offers up a better way to produce the carbon-silicon bonds we need to make things like pharmaceuticals, agricultural chemicals, and fuels.
"This is something that people talk about, dream about, wonder about," Annaliese Franz from the University of California, Davis, who wasn't involved in the research, told New Scientist.
"Any pharmaceutical chemist could read this on Thursday and on Friday decide they want to take this as a building block that they could potentially use."
Secondly, it signifies that a lifeform could potentially be at least partially based on silicon, and if the researchers continue to grow these kinds of bacteria, we could get a better understanding of what they could look like.
"This study shows how quickly nature can adapt to new challenges," one of the team, Frances Arnold, said in a press statement.
"The DNA-encoded catalytic machinery of the cell can rapidly learn to promote new chemical reactions when we provide new reagents and the appropriate incentive in the form of artificial selection. Nature could have done this herself if she cared to."
The research has been published in Science. 
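The directed-evolution procedure the article describes (mutate the gene, screen the variants, keep the best, repeat for three rounds) can be sketched as a toy simulation. The "protein", the mutation model, and the fitness function below are all invented stand-ins, not real enzyme chemistry:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def mutate(variant):
    """Return a copy with one randomly chosen position perturbed (hypothetical model)."""
    new = variant[:]
    i = random.randrange(len(new))
    new[i] += random.gauss(0, 1)
    return new

def fitness(variant):
    """Abstract stand-in for catalytic efficiency: higher is better, peak at 3.0."""
    return -sum((x - 3.0) ** 2 for x in variant)

protein = [0.0] * 5            # starting "wild-type" enzyme
for generation in range(3):    # the article reports three rounds of mutation
    library = [protein] + [mutate(protein) for _ in range(50)]  # mutant library
    protein = max(library, key=fitness)                         # artificial selection
    print(generation, round(fitness(protein), 2))
```

Keeping the parent in the library guarantees the selected variant never gets worse between rounds, which mirrors the "appropriate incentive in the form of artificial selection" Arnold describes.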

lilit

do androids dream of ... electric hearts? :lol:

holy shit, these guys are making progress!

http://youtu.be/-D_XrRo0h20
That's how it is with people. Nobody cares how it works as long as it works.