interesting tech stories


  1. #1

    interesting tech stories

    arstechnica.com | Andrew Cunningham | 3/31/2017, 1:45 PM
    Next-generation DDR5 RAM will double the speed of DDR4 in 2018

    You may have just upgraded your computer to use DDR4 recently or you may still be using DDR3, but in either case, nothing stays new forever. JEDEC, the organization in charge of defining new standards for computer memory, says that it will be demoing the next-generation DDR5 standard in June of this year and finalizing the standard sometime in 2018. DDR5 promises double the memory bandwidth and density of DDR4, and JEDEC says it will also be more power-efficient, though the organization didn't release any specific numbers or targets.
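    The claimed doubling is easy to sanity-check with back-of-the-envelope arithmetic: a standard DIMM moves 8 bytes per transfer, so peak bandwidth is just the transfer rate times eight. The sketch below assumes DDR4-3200 and a hypothetical DDR5 part at twice that rate; the exact DDR5 speed grades were not final at the time of the article.

    ```python
    # Peak-bandwidth arithmetic for a single 64-bit (8-byte) memory channel.
    # The DDR5 transfer rate here is an assumption ("roughly double DDR4"), not a spec.
    def peak_bandwidth_gb_s(transfers_per_second, bus_bytes=8):
        return transfers_per_second * bus_bytes / 1e9

    print(peak_bandwidth_gb_s(3200e6))   # DDR4-3200: ~25.6 GB/s per channel
    print(peak_bandwidth_gb_s(6400e6))   # hypothetical DDR5 at double the rate: ~51.2 GB/s
    ```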

    Like DDR4 back when it was announced, it will still be several years before any of us have DDR5 RAM in our systems. That's partly because the memory controllers in processors and SoCs need to be updated to support DDR5, and these chips normally take two or three years to design from start to finish. DDR4 RAM was finalized in 2012, but it didn't begin to go mainstream until 2015 when consumer processors from Intel and others added support for it.

    DDR5 has no relation to GDDR5, a separate decade-old memory standard used for graphics cards and game consoles.

    RAM isn't going anywhere in the near future, but if you look ahead a few years, you can see a potentially RAM-free future looming. Intel's Optane drives are attempting to combine the capacity, density, and non-volatility of an SSD with speed that is beginning to approach RAM's—first-generation Optane drives have about 10 times higher latency according to Intel, but that's not so far off the mark for many workloads. As the technology improves, it may well remove the need for separate pools of RAM and storage, triggering a fundamental shift in the way that computing devices work.

    For now, though, bring on the faster RAM.



  2. #2

    There Are 30,000 Particle Accelerators In The World; What Do They All Do?!


  3. #3

    Why Intel Insists Rumors Of The Demise Of Moore’s Law Are Greatly Exaggerated

    fastcompany.com | Jared Newman

    For Intel, it was an important announcement: Moore’s law is not dead.

    Before a gaggle of tech writers and analysts, company officials last week displayed charts and illustrations showing how transistor density on an integrated circuit continues to double every two years—consistent with a prediction made more than five decades ago by Intel cofounder Gordon Moore.

    The intended message: Intel hasn’t lost its zeal for big leaps in computing, even as it changes the way it introduces new chips, and branches beyond the PC processor into other areas like computer vision and the internet of things.

    “Number one, too many people have been writing about the end of Moore’s law, and we have to correct that misimpression,” Mark Bohr, Intel’s technology and manufacturing group senior fellow and director of process architecture and integration, says in an interview. “And number two, Intel has developed some pretty compelling technologies … that not only prove that Moore’s law is still alive, but that it’s going to continue to provide the best benefits of density, cost performance, and power.”

    But while Moore’s law soldiers on, it’s no longer associated with the types of performance gains Intel was making 10 to 20 years ago. The practical benefits of Moore’s law are not what they used to be.

    Slower steps, bigger leaps

    For each new generation of microprocessor, Intel used to adhere to a two-step cycle, called the “tick-tock.” The “tick” is where Moore’s law takes effect, using a new manufacturing process to shrink the size of each transistor and pack more of them onto a chip. The subsequent “tock” introduces a new microarchitecture, which yields further performance improvements by optimizing how the chip carries out instructions. Intel would typically go through this cycle once every two years.

    But in recent years, shrinking the size of transistors has become more challenging, and in 2016, Intel made a major change. The latest 14 nm process added a third “optimization” step after the architectural change, with modest performance improvements and new features such as 4K HDR video support. And in January, Intel said it would add a fourth optimization step, stretching the cycle out even further. The move to a 10 nm process won’t happen until the second half of 2017, three years after the last “tick,” and Intel expects the new four-step process to repeat itself.

    This “hyper scaling” allows computing power to continue to increase while needing fewer changes in the manufacturing process. If you divide the number of transistors in Intel’s current tick by the surface area of two common logic cells, the rate of improvement still works out to more than double every two years, keeping Moore’s law on track.
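    Bohr's proposed metric is usually described as a weighted transistors-per-area figure over two standard cells, a NAND2 gate and a scan flip-flop. The sketch below is only an illustration of that idea; the cell areas and transistor counts are placeholder assumptions, not Intel's published numbers.

    ```python
    # Weighted logic-cell density in the spirit of Intel's proposed metric:
    # 0.6 * (NAND2 transistors / NAND2 area) + 0.4 * (scan flip-flop transistors / area).
    # All numbers below are illustrative placeholders, not real process data.
    def transistor_density(nand2_area_um2, sff_area_um2,
                           nand2_transistors=4, sff_transistors=30,
                           w_nand2=0.6, w_sff=0.4):
        return (w_nand2 * nand2_transistors / nand2_area_um2 +
                w_sff * sff_transistors / sff_area_um2)

    older = transistor_density(nand2_area_um2=0.060, sff_area_um2=0.450)   # hypothetical old node
    newer = transistor_density(nand2_area_um2=0.027, sff_area_um2=0.200)   # hypothetical new node
    print(f"{newer / older:.1f}x denser")   # ~2.2x with these made-up cell areas
    ```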

    Intel is making larger, but less frequent, improvements.

    “Yes, they’ve taken longer, but we’ve taken bigger steps,” Bohr said during his three-hour presentation.

    Why didn’t anyone draw this seemingly obvious conclusion sooner? Intel says its competitors in the semiconductor fabrication business have been muddying the waters, either using metrics that can’t be quantified, or moving to new manufacturing processes despite smaller gains in transistor size. With the metric Intel is now using, the company estimates that competitors are about three years behind in transistor density. And while chip makers do have a tendency to favor metrics that look flattering, Bohr says Intel’s methodology has been proposed by others in the industry before, and it can be verified by third parties.

    “If company A doesn’t want to report their density using that metric, other companies, including a company that does reverse-engineering reports, can easily do the necessary measurements and report on that number,” Bohr says.

    Gartner analyst Sam Wang agrees that Intel is being fair in its methodology, though he’s not sure whether the rest of the industry will follow along. He says it’s up to the media and analysts to popularize the measurement Intel is pushing.

    “If all foundries agree to adopt this method, that would be a great comparison,” he says.

    Moore’s Law in the real world

    Despite the continued advancement of Moore’s law, the reality is that Intel’s processors aren’t making the performance leaps that they used to, and that was true even before the end of the tick-tock cycle. This chart from Elsevier is illustrative, showing that around 2003, processor performance gains dropped from 52% per year to 22% per year. Intel’s current 7th-generation Core i7 processor is a 15% improvement over the previous generation, and the upcoming 8th-generation Core i7 is expected to be similar.
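    The difference between those per-year figures compounds dramatically. A quick calculation of what 52% versus 22% annual improvement yields over a decade:

    ```python
    # Compound the two annual improvement rates cited above over ten years.
    fast, slow, years = 1.52, 1.22, 10
    print(f"{fast**years:.0f}x vs {slow**years:.0f}x over {years} years")   # ~66x vs ~7x
    ```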
    Intel CPU performance over past, current, and future generations.

    But while the 1990s and early aughts were a period of rapid growth in processing power due to innovations in process technology and microarchitecture, Moore’s law in itself does not weigh in on any particular performance improvement.

    “Just to go back to the original papers by Gordon Moore, his prediction was that transistor density would double every two years. He didn’t make any statement about performance,” Bohr says.

    Still, aren’t more powerful processors the point of this whole exercise? Not exactly. The real benefit of Moore’s law is to drive down the cost per transistor at a consistent rate. This does allow Intel to improve performance by adding more transistors, but it also decreases the cost of producing new products. While Intel doesn’t disclose its exact cost per transistor, the company points out that cost declines are the same now as they were under the tick-tock cycle. And yes, there’s a graph to prove it.



    Intel says cost per transistor is on a steady decline.

    For present-day Intel, the main benefits of Moore’s law are probably more attractive than the incidental performance gains of 20 years ago. Amid up-and-down PC sales, Intel has been moving into other businesses, such as wearables and other small-scale connected products, and wants to take on Nvidia in AI for self-driving cars. Last year, the company also made a deal with ARM–whose architecture underpins most smartphone and tablet processors–to produce new chips on Intel’s upcoming 10 nm manufacturing process.

    Adherence to Moore’s law, then, is no longer just about keeping Intel-powered PCs on the cutting edge.

    “Following Moore’s law in terms of delivering better transistor density, lower transistor costs, and improved transistor performance and power, those benefits really apply to all product lines,” Bohr says, “whether we’re talking about a server part, a client part, or a low-power mobile part.”


  5. #4

    Companies start implanting microchips into workers' bodies

    latimes.com | Associated Press

    The syringe slides in between the thumb and index finger. Then, with a click, a microchip is injected in the employee's hand. Another “cyborg” is created.

    What could pass for a dystopian vision of the workplace is almost routine at the Swedish start-up hub Epicenter. The company offers to implant its workers and start-up members with microchips the size of grains of rice that function as swipe cards: to open doors, operate printers or buy smoothies with a wave of the hand.

    “The biggest benefit, I think, is convenience,” said Patrick Mesterton, co-founder and chief executive of Epicenter. As a demonstration, he unlocks a door merely by waving near it. “It basically replaces a lot of things you have, other communication devices, whether it be credit cards or keys.”

    The technology itself is not new: Such chips are used as virtual collar plates for pets, and companies use them to track deliveries. But never before has the technology been used to tag employees on a broad scale. Epicenter and a handful of other companies are the first to make chip implants broadly available.

    And as with most new technologies, it raises security and privacy issues. Although the chips are biologically safe, the data they generate can show how often employees come to work or what they buy. Unlike company swipe cards or smartphones, which can generate the same data, people cannot easily separate themselves from the chips.

    “Of course, putting things into your body is quite a big step to do, and it was even for me at first,” said Mesterton, saying he initially had his doubts.

    “On the other hand, I mean, people have been implanting things into their body, like pacemakers and stuff to control your heart,” he said. “That's a way, way more serious thing than having a small chip that can actually communicate with devices.”

    Epicenter, which is home to more than 100 companies and roughly 2,000 workers, began implanting workers in January 2015. Now, about 150 workers have the chips. A company based in Belgium also offers its employees such implants, and there are isolated cases around the world in which tech enthusiasts have tried them out in recent years.

    The small implants use near-field communication technology, or NFC, the same as in contactless credit cards or mobile payments. When activated by a reader a few inches away, a small amount of data flows between the two devices via electromagnetic waves. The implants are “passive,” meaning they contain information that other devices can read, but cannot read information themselves.
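    For a feel of how the reader side works, here is a minimal sketch using the Python nfcpy library to read whatever NDEF records a passive tag presents when it comes into range. The USB reader path and the assumption that the implant behaves like an ordinary NFC tag are illustrative, not details from the article.

    ```python
    # Read a passive NFC tag with nfcpy: the reader powers the tag over a few
    # centimetres and pulls whatever NDEF records it exposes.
    import nfc

    def on_connect(tag):
        print("Tag found:", tag)            # tag type and identifier
        if tag.ndef:                        # NDEF-formatted tags expose readable records
            for record in tag.ndef.records:
                print(record)
        return True                         # hold the connection until the tag is removed

    clf = nfc.ContactlessFrontend("usb")    # assumes a USB NFC reader is attached
    try:
        clf.connect(rdwr={"on-connect": on_connect})
    finally:
        clf.close()
    ```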

    Ben Libberton, a microbiologist at Stockholm's Karolinska Institute, says hackers could conceivably gain huge swaths of information from embedded microchips. The ethical dilemmas will become bigger the more sophisticated the microchips become.

    “The data that you could possibly get from a chip that is embedded in your body is a lot different from the data that you can get from a smartphone,” he says. “Conceptually, you could get data about your health, you could get data about your whereabouts, how often you're working, how long you're working, if you're taking toilet breaks and things like that.”

    Libberton said that if such information is collected, the big question remains of what happens to it, who uses it and for what purpose.

    So far, Epicenter's group of cyborgs doesn't seem too concerned.

    “People ask me, ‘Are you chipped?’ and I say, ‘Yes, why not?’” said Fredric Kaijser, the 47-year-old chief experience officer at Epicenter. “And they all get excited about privacy issues and what that means and so forth. And for me it's just a matter of I like to try new things and just see it as more of an enabler and what that would bring into the future.”

    Epicenter workers stage monthly events where attendees can receive the implant.

    That means visits from self-described “body hacker” Jowan Osterlund from Biohax Sweden, who performs the “operation.”

    He injects the implants — using pre-loaded syringes — into the fleshy area of the hand, just next to the thumb. The process lasts a few seconds, and more often than not, there are no screams and barely a drop of blood. “The next step for electronics is to move into the body,” he says.

    Sandra Haglof, 25, who works for Eventomatic, an events company that works with Epicenter, has had three piercings before, and her left hand barely shakes as Osterlund injects the chip.

    “I want to be part of the future,” she laughs.

  6. #5

    'Arctic World Archive' Will Keep the World's Data Safe In an Arctic Mineshaft

    news.slashdot.org | Posted by Slashdot



    An anonymous reader quotes a report from The Verge:
    Norway's famous doomsday seed vault is getting a new neighbor. It's called the Arctic World Archive, and it aims to do for data what the Svalbard Global Seed Vault has done for crop samples -- provide a remote, impregnable home in the Arctic permafrost, safe from threats like natural disaster and global conflicts. But while the Global Seed Vault is (partially) funded by charities who want to preserve global crop diversity, the World Archive is a for-profit business, created by Norwegian tech company Piql and Norway's state mining company SNSK. The Archive was opened on March 27th this year, with the first customers -- the governments of Brazil, Mexico, and Norway -- depositing copies of various historical documents in the vault.

    Data is stored in the World Archive on optical film specially developed for the task by Piql. (And, yes, the company name is a pun on the word pickle, as in preserving-in-vinegar.) The company started life in 2002 making video formats that bridged analog film and digital media, but as the world went fully digital it adapted its technology for the task of long-term storage. As Piql founder Rune Bjerkestrand tells The Verge: "Film is an optical medium, so what we do is, we take files of any kind of data -- documents, PDFs, JPGs, TIFFs -- and we convert that into big, high-density QR codes. Our QR codes are massive, and very high resolution; we use greyscale to get more data into every code. And in this way we convert a visual storage medium, film, into a digital one."

    Once data is imprinted on film, the reels are stored in a converted mineshaft in the Arctic archipelago of Svalbard. The mineshaft (different to the one used by the Global Seed Vault) was originally operated by SNSK for the mining of coal, but was abandoned in 1995. The vault is 300 meters below the ground and impervious to both nuclear attacks and EMPs. Piql claims its proprietary film format will store data safely for at least 500 years, and maybe as long as 1,000 years, with the assistance of the mine's climate.
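    The file-to-QR-code step Bjerkestrand describes is easy to picture with an off-the-shelf library. The sketch below encodes one chunk of a file into an ordinary black-and-white QR image using the Python qrcode package; Piql's high-density greyscale codes and film writer are proprietary, so the chunk size, error-correction level, and file name here are purely illustrative.

    ```python
    # Encode one chunk of a file as a QR code "frame" using the qrcode package.
    # Archival systems split files into many such codes with heavy error correction;
    # the chunk size and error-correction level below are illustrative choices.
    import base64
    import qrcode

    with open("document.pdf", "rb") as f:        # any input file (hypothetical path)
        chunk = f.read(1024)                     # one small chunk per code

    payload = base64.b64encode(chunk).decode("ascii")
    img = qrcode.make(payload, error_correction=qrcode.constants.ERROR_CORRECT_H)
    img.save("frame_0001.png")                   # one frame of the archive
    ```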

  7. #6

    Google says its AI chips smoke CPUs, GPUs in performance tests

    pcworld.com | By Blair Hanley Frank, U.S. Correspondent, IDG News Service | Apr 5, 2017 12:25 PM PT

    Credit: Google

    Four years ago, Google was faced with a conundrum: if all its users hit its voice recognition services for three minutes a day, the company would need to double the number of data centers just to handle all of the requests to the machine learning system powering those services.

    Rather than buy a bunch of new real estate and servers just for that purpose, the company embarked on a journey to create dedicated hardware for running machine-learning applications like voice recognition.

    The result was the Tensor Processing Unit (TPU), a chip that is designed to accelerate the inference stage of deep neural networks. Google published a paper on Wednesday laying out the performance gains the company saw over comparable CPUs and GPUs, both in terms of raw power and the performance per watt of power consumed.

    A TPU was on average 15 to 30 times faster at the machine learning inference tasks tested than a comparable server-class Intel Haswell CPU or Nvidia K80 GPU. Importantly, the performance per watt of the TPU was 25 to 80 times better than what Google found with the CPU and GPU.

    Driving this sort of performance increase is important for Google, considering the company’s emphasis on building machine learning applications. The gains validate the company’s focus on building machine learning hardware at a time when it’s harder to get massive performance boosts from traditional silicon.

    This is more than just an academic exercise. Google has used TPUs in its data centers since 2015 and they’ve been put to use improving the performance of applications including translation and image recognition. The TPUs are particularly useful when it comes to energy efficiency, which is an important metric related to the cost of using hardware at massive scale.

    One of the other key metrics for Google’s purposes is latency, which is where the TPUs excel compared to other silicon options. Norm Jouppi, a distinguished hardware engineer at Google, said that machine learning systems need to respond quickly in order to provide a good user experience.

    “The point is, the internet takes time, so if you’re using an internet-based server, it takes time to get from your device to the cloud, it takes time to get back,” Jouppi said.

    “Networking and various things in the cloud — in the data center — they take some time. So that doesn’t leave a lot of [time] if you want near-instantaneous responses.”

    Google tested the chips on six different neural network inference applications, representing 95 percent of all such applications in Google’s data centers. The applications tested include DeepMind AlphaGo, the system that defeated Lee Sedol at Go in a five-game match last year.

    The company tested the TPUs against hardware that was released around roughly the same time to try and get an apples-to-apples performance comparison. It's possible that newer hardware would at least narrow the performance gap.

    There’s still room for TPUs to improve, too. Using the GDDR5 memory that’s present in an Nvidia K80 GPU with the TPU should provide a performance improvement over the existing configuration that Google tested. According to the company’s research, the performance of several applications was constrained by memory bandwidth.

    Furthermore, the authors of Google’s paper claim that there’s room for additional software optimization to increase performance. The authors called out one of the tested convolutional neural network applications (referred to in the paper as CNN1) as a candidate. However, because of existing performance gains from the use of TPUs, it’s not clear if those optimizations will take place.

    While neural networks mimic the way neurons transmit information in humans, CNNs are modeled specifically on how the brain processes visual information.
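    For readers unfamiliar with the term, the core operation in a convolutional network is just a small filter slid across an image, producing a map of where the filter's pattern appears. A minimal NumPy sketch (the filter values are arbitrary):

    ```python
    # Minimal 2D convolution: slide a 3x3 filter over a grayscale image.
    import numpy as np

    def conv2d(image, kernel):
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.random.rand(8, 8)                      # stand-in for a grayscale image
    kernel = np.array([[-1.0, -1.0, -1.0],
                       [ 0.0,  0.0,  0.0],
                       [ 1.0,  1.0,  1.0]])           # responds to horizontal edges
    print(conv2d(image, kernel).shape)                # (6, 6) feature map
    ```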

    “As CNN1 currently runs more than 70 times faster on the TPU than the CPU, the CNN1 developers are already very happy, so it’s not clear whether or when such optimizations would be performed,” the authors wrote.

    TPUs are what’s known in chip lingo as an application-specific integrated circuit (ASIC). They’re custom silicon built for one task, with an instruction set hard-coded into the chip itself. Jouppi said that he wasn’t overly concerned by that, and pointed out that the TPUs are flexible enough to handle changes in machine learning models.

    “It’s not like it was designed for one model, and if someone comes up with a new model, we’d have to junk our chips or anything like that,” he said.

    Google isn’t the only company focused on using dedicated hardware for machine learning. Jouppi said that he knows of several startups working in the space, and Microsoft has deployed a fleet of field-programmable gate arrays in its data centers to accelerate networking and machine learning applications.

    Blair Hanley Frank is primarily focused on the public cloud, productivity and operating systems businesses for the IDG News Service.

  8. #7

    Google's Tensor Processing Unit could advance Moore's Law 7 years into the future

    pcworld.com

    By Gordon Mah Ung, Executive Editor, PCWorld | May 18, 2016 2:08 PM PT

    Google unveils a custom chip, which it says advances computing performance by three generations.

    Credit: Google

    Forget the CPU, GPU, and FPGA: Google says its Tensor Processing Unit, or TPU, advances machine learning capability by a factor of three generations.

    “TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGA,” said Google CEO Sundar Pichai during the company’s I/O developer conference on Wednesday.

    TPUs have been a closely guarded secret of Google, but Pichai said the chips powered the AlphaGo computer that beat Lee Sedol, the world champion in the incredibly complicated game called Go.

    Pichai didn’t go into details of the Tensor Processing Unit but the company did disclose a little more information in a blog posted on the same day as Pichai’s revelation.

    “We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law),” the blog said. “TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation.

    Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly.”
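    The “reduced computational precision” the blog mentions is essentially what low-precision integer arithmetic buys: narrower operands need far less silicon than 32-bit floating point. A rough NumPy illustration of 8-bit quantization (the scaling scheme is a simplification, not Google's actual implementation):

    ```python
    # Quantize float32 vectors to int8, do the multiply-accumulate in integers,
    # then rescale: the kind of reduced-precision arithmetic the TPU exploits.
    import numpy as np

    x = np.random.randn(256).astype(np.float32)
    w = np.random.randn(256).astype(np.float32)

    def quantize(v):
        scale = np.max(np.abs(v)) / 127.0            # map the largest value to +/-127
        return np.round(v / scale).astype(np.int8), scale

    xq, xs = quantize(x)
    wq, ws = quantize(w)

    approx = np.sum(xq.astype(np.int32) * wq.astype(np.int32)) * xs * ws
    exact = float(np.dot(x, w))
    print(exact, approx)                             # close, despite 8-bit operands
    ```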

    The tiny TPU can fit into a hard drive slot within the data center rack and has already been powering RankBrain and Street View, the blog said.

    Don’t give up on the CPU or GPU just yet

    What isn’t known is what exactly the TPU is. SGI had a commercial product called a Tensor Processing Unit in its workstations in the early 2000s that appears to have been a Digital Signal Processor, or DSP. A DSP is a dedicated chip that does a repetitive, simple task extremely quickly and efficiently. But according to Google there's no connection.

    Analyst Patrick Moorhead of Moor Insights & Strategy, who attended the I/O developer conference, said, from what little Google has revealed about the TPU, he doesn’t think the company is about to abandon traditional CPUs and GPUs just yet.

    “It’s not doing the teaching or learning,” he said of the TPU. “It’s doing the production or playback.”

    Moorhead said he believes the TPU could be a form of chip that implements the machine learning algorithms that are crafted using more power-hungry GPUs and CPUs.

    As to Google’s claim that the TPU’s performance is akin to accelerating Moore’s Law by seven years, he doesn’t doubt it. He sees it as similar to the relationship between a traditional ASIC (application-specific integrated circuit) and a CPU.

    ASICs are hard-coded, highly optimized chips that do one thing really well. They can’t be changed like an FPGA but offer huge performance benefits. He likened the comparison to decoding an H.265 video stream with a CPU versus an ASIC built for that task. A CPU without dedicated circuits would consume far more power than the ASIC at that job.

    One issue with ASICs, though, is their cost and their permanent nature, he said. The only way to change the algorithm, if required by a bug or improvement, is to make a new chip. There's no reprogramming. That’s why ASICs have traditionally been relegated to entities with unlimited budgets, like governments.

    Google said it has been secretly using its TPU to make applications like Street View smarter.

    One of the founding fathers of hardcore tech reporting, Gordon has been covering PCs and components since 1998.

  9. #8

    Hyperloop One Unveils its Vision for America, Details 11 Routes as Part of Global Challenge

    Apr 06 2017

    Company Announces Completion of Tube Installation at Las Vegas DevLoop, World’s First Full-System Test Track

    Routes Include: Boston-Somerset-Providence, Cheyenne-Houston, Chicago-Columbus-Pittsburgh, Denver-Colorado Springs, Denver-Vail, Kansas City-St. Louis, Los Angeles-San Diego, Miami-Orlando, Reno-Las Vegas, Seattle-Portland, and Dallas/Fort Worth-Austin-San Antonio-Houston

    WASHINGTON, DC. Apr. 6, 2017 – Executives from Hyperloop One joined leading policymakers and transportation experts here today to reveal details of select Hyperloop routes in the United States and to initiate a nationwide conversation about the future of American transportation.

    Of more than 2,600 participants in the Hyperloop One Global Challenge, 11 teams presented routes, linking 35 cities and covering more than 2,800 miles. They join 24 other teams from around the globe, each vying to be among 12 finalists. Three eventual winners will work closely with Hyperloop One engineering and business development teams to explore project development and financing.

    "Hyperloop One is the only company in the world building an operational commercial Hyperloop system,” said Rob Lloyd, chief executive officer of Hyperloop One. “This disruptive technology – conceived, developed and built in the U.S. – will move passengers and cargo faster, cleaner and more efficiently. It will transform transportation as we know it and create a more connected world.”

    Lloyd said that by year’s end the company will have a team of 500 engineers, fabricators, scientists and other employees dedicated to bringing the technology to life. Hyperloop One, he said, will enable broad benefits across communities and markets, support sustainable manufacturing and supply chains, ease strain on existing infrastructure and improve the way millions live and work.

    With Hyperloop One, passengers and cargo are loaded into a pod and accelerate gradually via electric propulsion through a low-pressure tube. The pod quickly lifts above the track using magnetic levitation and glides at airline speeds for long distances due to ultra-low aerodynamic drag. This week, the company finalized the tube installation on its 1640-foot-long DevLoop, located in the desert outside of Las Vegas; the facility serves as an outdoor lab for its proprietary levitation, propulsion, vacuum and control technologies.
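    The low-pressure tube is what makes airline speeds with very little drag plausible: aerodynamic drag scales with air density, and at fixed temperature density scales with pressure, so pumping the tube down to a small fraction of an atmosphere cuts drag by the same factor. A back-of-the-envelope sketch (pod size, drag coefficient, and tube pressure are assumptions, not Hyperloop One figures):

    ```python
    # Drag force F = 0.5 * rho * v^2 * Cd * A, with air density from the ideal gas law.
    # All numbers are illustrative assumptions, not Hyperloop One specifications.
    v = 270.0             # m/s, roughly airline cruise speed
    cd, area = 0.3, 4.0   # assumed drag coefficient and frontal area (m^2)

    def drag_newtons(pressure_pa, temp_k=293.0):
        rho = pressure_pa / (287.05 * temp_k)    # dry-air gas constant gives kg/m^3
        return 0.5 * rho * v**2 * cd * area

    print(drag_newtons(101325.0))   # ~53,000 N at sea-level pressure
    print(drag_newtons(100.0))      # ~52 N at 100 Pa: roughly 1000x less drag
    ```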

    "The U.S. has always been a global innovation vanguard – driving advancements in computing, communication and media to rail, automobiles and aeronautics,” said Shervin Pishevar, executive chairman of Hyperloop One. “Now, with Hyperloop One, we are on the brink of the first great breakthrough in transportation technology of the 21st century, eliminating the barriers of time and distance and unlocking vast economic opportunities.”

    “Hyperloop One is the American Dream, and it’s fast becoming an American reality,” Pishevar said.

    The Hyperloop One Global Challenge
    The Hyperloop One Global Challenge kicked off in May 2016 as an open call to individuals, universities, companies and governments to develop comprehensive proposals for deploying Hyperloop One's transport technology in their region. Five of the proposals – including those from Texas, Florida, Colorado, Nevada and Missouri – involve officials from their state Departments of Transportation.

    Proposed routes would greatly reduce passenger and cargo transport times across some of the country’s most heavily trafficked regions, including Los Angeles-San Diego, Miami-Orlando and Seattle-Portland. The longest distance proposal, Cheyenne-Houston, would run 1,152 miles across four states, reducing a journey that currently takes 17 hours by car or truck to 1 hour and 45 minutes, an implied average speed of roughly 660 mph.

    Hyperloop One's panel of experts in transportation, technology, economics and innovation is considering the route proposals listed above.

    The global challenge expert panel comprises: Peter Diamandis, Founder and Executive Chairman of the XPRIZE Foundation; Bassam Mansour, International Railway Industry Advisor and Director of Rail Systems at HSS Engineers; Clive Burrows, Group Engineering Director for FirstGroup; Ulla Tapaninen, Senior Specialist in Economic Development for the City of Helsinki and adjunct professor at University of Turku.

    Vision for America Conference
    In addition to new details on the U.S. routes, the D.C. event featured a roundtable of speakers discussing the future of transportation that included former Secretary of Transportation Anthony Foxx; KPMG Head of Infrastructure Andy Garbutt; MIT Professor Alan Berger; Vice President of Corporate Communications and Government Affairs of Amtrak, Caroline Decker; and former BNSF Railway Vice President of Network Strategy Dean Wise. Keynote remarks were delivered by Tyler Duvall, partner at McKinsey & Co. and former Assistant Secretary for Transportation Policy at the US Dept. of Transportation.

    “The U.S. is challenged to meet the growing demands on our transportation infrastructure, with congestion costing the economy more than $160 billion per year due to wasted time and fuel,” said Tyler Duvall, a partner at McKinsey & Company. “However, new technologies are poised to drive efficiency, increase capacity, and help spur social and economic growth. To seize this opportunity, the approach to infrastructure planning must keep pace by integrating new technologies and taking long-term views of what mobility will look like in the future.”

    For more information on Hyperloop One, please visit www.hyperloop-one.com.

    About Hyperloop One
    Hyperloop One is reinventing transportation by developing the world's first Hyperloop, an integrated structure to move passengers and cargo between two points immediately, safely, efficiently and sustainably. Our team has the world's leading experts in engineering, technology and transport project delivery, working in tandem with global partners and investors to make Hyperloop a reality, now. Headquartered in Los Angeles, the company is led by CEO Rob Lloyd and co-founded by Executive Chairman Shervin Pishevar and President of Engineering Josh Giegel. For more information, please visit Hyperloop One.

  10. #9

    Google's AI will take on the world's top Go player next month

    engadget.com

    At the Future of Go Summit from May 23rd to May 27th, Google and the China Go Association (with help from the Chinese government) will bring together AlphaGo and some of the world's best Go players and AI experts to "explore the mysteries" of the ancient board game.

    There will be a variety of games on offer including Pair Go, where Chinese professionals will face off against each other but alternate moves with an AlphaGo teammate. The Team Go match, on the other hand, will see AlphaGo battle a five-player team of Chinese pros in a bid to test "creativity and adaptability." Ke Jie vs AlphaGo will, of course, be the main focus. It'll be a best-of-three match that DeepMind hopes will push AlphaGo to its absolute limit.

    The event makes for an interesting spectacle, especially considering Ke once said he didn't want to sit down with AlphaGo because it would learn his playing style. However, when DeepMind convincingly beat Lee Sedol, the 9th dan professional quickly changed his tune.

    "Instead of diminishing the game, as some feared, artificial intelligence (A.I.) has actually made human players stronger and more creative," said Hassabis. "It's humbling to see how pros and amateurs alike, who have pored over every detail of AlphaGo's innovative game play, have actually learned new knowledge and strategies about perhaps the most studied and contemplated game in history."

  11. #10

    Mobile-phone signals bolster street-level rain forecasts

    nature.com

    Jeff Tollefson

    Image: People in the developing world could benefit from improved precipitation forecasts. (Credit: Nichole Sobecki/Panos)

    Meteorologists have long struggled to forecast storms and flooding at the level of streets and neighborhoods, but they may soon make headway thanks to the spread of mobile-phone networks.

    This strategy relies on the physics of how water scatters and absorbs microwaves. In 2006, researchers demonstrated that they could estimate how much precipitation was falling in an area by comparing changes in the signal strength between communication towers [1]. Accessing the commercial signals of mobile-phone companies was a major stumbling block for researchers, however, and the field progressed slowly. That is changing now, enabling experiments across Europe and Africa.
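    The usual way to turn those signal-strength changes into rainfall is a power-law relation between rain rate and the extra attenuation per kilometre of link; the coefficients depend on the link's frequency and polarization. A minimal sketch of that inversion (the coefficients and link numbers below are placeholders, not values from the studies cited here):

    ```python
    # Invert the standard power-law model A = k * R**alpha, where A is specific
    # attenuation (dB/km) and R is rain rate (mm/h). k and alpha are placeholders;
    # real values depend on the link's frequency and polarization.
    def rain_rate_mm_per_h(path_loss_db, baseline_db, length_km, k=0.12, alpha=1.1):
        excess_db = max(path_loss_db - baseline_db, 0.0)   # rain-induced attenuation
        specific_attenuation = excess_db / length_km       # dB per km of link
        return (specific_attenuation / k) ** (1.0 / alpha)

    # Example: 9 dB of extra loss on a 3 km tower-to-tower link
    print(rain_rate_mm_per_h(path_loss_db=45.0, baseline_db=36.0, length_km=3.0))
    ```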

    The technology now appears ready for primetime. It could lead to more precise flood warnings — and more accurate storm predictions if the new data are integrated into modern weather forecasting models. Proponents also hope to use this approach to expand modern weather services in developing countries.

    The newest entry into this field is ClimaCell, a start-up company in Boston, Massachusetts, that launched on 2 April. The 12-person company claims it can integrate data from microwave signals and other weather observations to create more accurate short-term forecasts. It says it can provide high-resolution, street-level weather forecasts three hours ahead, and will aim to provide a six-hour forecast within six months. The company has yet to make information on its system public or publish it in peer-reviewed journals.

    ClimaCell will start in the United States and other developed countries, but plans to move into developing countries including India later this year. “The signals are everywhere, so basically we want to cover the world,” says Shimon Elkabetz, ClimaCell’s chief executive officer and co-founder.
    Image: An example of a hyperlocal rain forecast from ClimaCell. (Credit: ClimaCell)

    Coming online

    But the fledgling company faces competition from researchers in Europe and Israel who have tested systems at multiple scales, including countries and cities, over the past several years. The scientists recently formed a consortium to advance the technology using open-source software. Coordinated by Aart Overeem, a hydrometeorologist at the Royal Netherlands Meteorological Institute in De Bilt, the group is seeking nearly €5 million (US$5.3 million) from the European Commission to create a prototype rainfall-monitoring system that could eventually be set up across Europe and Africa.

    “There is a lot of evidence that this technology works, but we still need to test it in more regions with large data sets and different networks,” Overeem says. Although ClimaCell has made bold claims about its programme, Overeem says he cannot properly review the company's technology without access to more data.

    “The fact that a start-up company and commercial investors are willing to put money into this technology is good news, but I believe there is room for all,” says Hagit Messer, an electrical engineer at Tel Aviv University, Israel, who led the 2006 study. She is part of the research consortium led by Overeem.

    Previous projects by members of the consortium that tested the technology have met with success. In 2012, for instance, Overeem and his colleagues showed that the technology could be applied at the country level using commercial microwave data in the Netherlands [2]. And in 2015, the Swedish Meteorological and Hydrological Institute (SMHI), headquartered in Norrköping, launched a prototype real-time ‘microweather’ project in Gothenburg. It collects around 6 million measurements in the city each day in partnership with the telecommunications company Ericsson and a cellular tower operator. The result is a minute-by-minute estimate of rainfall on a 500-metre-resolution map that encompasses the city.

    A brave new world

    Jafet Andersson, an SMHI hydrologist, says that the project has helped to advance the technology. For example, he notes that microwave data often overestimate rainfall by as much as 200-300%. But the team has worked out how to correct for that bias without relying on reference measurements from rain gauges or ground-based radar. This will make it easier to extend the technology to developing countries.

    “It will take some time, but we are in the process of industrializing it on a country scale, or even a global scale,” Andersson says.

    Researchers with the consortium have deployed the technique in African countries that do not have access to ground-based radar and extensive rain-gauge networks. A team led by Marielle Gosset, a hydrologist at the French Institute for Development Research in Toulouse, demonstrated a proof-of-concept system in Burkina Faso [3] in 2012 and has since branched out to other countries including Niger. Working with French telecoms giant Orange, and with funding from the World Bank and the United Nations, her team hopes to expand into Morocco and begin using real-time microwave data in Cameroon this year.

    The technology is attracting interest in Africa because traditional weather monitoring systems such as radar are too expensive, Gosset says. Weather forecasts based on microwave signals give developing countries a similar system, but for less money, she says.

    Access to commercial data is getting easier too. Researchers say that telecommunication companies are beginning to see the value of releasing the data, and the consortium plans to create a central repository for processing the information. Project scientists hope to create a model that will enable a smooth partnership with the industry.

    “I think that this door is just about to open,” says Andersson.


     