
interesting tech stories


  1. #11

    There's a company that plans to launch the world's first single stage to orbit rocket in 2018. This is the first time I've heard of this despite the fact that I keep up with these types of developments in the aerospace industry pretty religiously. It might be vaporware, but I think it's about time aerospike engines got some attention.

    Snip:

    "...the ARCA Space Corporation has developed a concept for a single-stage-to-orbit (SSTO) rocket. It’s known as the Haas 2CA, the latest in a series of rockets being developed by the New Mexico-based aerospace company. If all goes as planned, this rocket will be the first SSTO rocket in history, meaning it will be able to place payloads and crew into Earth’s orbit relying on only one stage with one engine."

    https://www.universetoday.com/134812...-orbit-rocket/

  2. #12

    New Material Sucks Drinking Water Out Of Thin Air

    blogs.discovermagazine.com
    Senior author Omar Yaghi demonstrates how the MOF works using a model.

    A thin lattice of metals and organic compounds could turn moisture trapped in the atmosphere into drinkable water using only the power of the sun.

    By optimizing what they call a metal-organic framework (MOF) to hang on to water molecules, researchers at MIT and the University of California-Berkeley have created a system that passively catches water vapor and releases it later when exposed to heat from sunlight. Their device could offer a low-cost, sustainable means to deliver drinkable water to arid regions of the world.

    Their MOF is a tangled lattice of zirconium and fumarate, an organic compound. Such frameworks are composed of a dense knot of threads perfect for holding on to molecules. By altering the composition of the framework, MOFs can be optimized to grab different kinds of compounds — anything from hydrogen and methane to petrochemicals.

    There are thousands of such materials out there, and likely more still awaiting discovery. This particular porous framework was tuned to embrace water molecules, allowing it to sift H2O out of the air. Heating the material up with sunlight forces the water through a condenser, where it can be captured and put to use.

    In a paper published Thursday in Science, the researchers say that one kilogram of their material can produce 2.8 liters of water a day in conditions where the relative humidity is as low as 20 percent — about the same as most deserts. Although the material holds about 20 percent of its weight in water right now, they think they may be able to double its capacity with improved materials. The basic components are cheap and easy to acquire, they say, paving the way for large-scale production of the MOF.
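
    Not from the article, but to put those figures in perspective, here is a trivial Python sketch relating MOF mass, per-cycle water capacity and daily yield. The ~20 wt% capacity comes from the quoted text; the number of capture/release cycles per day is an assumption for illustration only.

    Code:
    # Toy arithmetic relating MOF mass, per-cycle water capacity, and daily yield.
    # capacity_frac (~0.20) comes from the article; cycles_per_day is an assumed
    # illustration parameter, not a number from the paper.

    def water_per_cycle_litres(mof_kg, capacity_frac=0.20):
        # 1 kg of captured water is roughly 1 litre
        return mof_kg * capacity_frac

    def water_per_day_litres(mof_kg, capacity_frac=0.20, cycles_per_day=1):
        return water_per_cycle_litres(mof_kg, capacity_frac) * cycles_per_day

    # 1 kg of MOF at 20 wt% capacity yields ~0.2 L per capture/release cycle,
    # so hitting the quoted ~2.8 L/day would need several cycles or more material.
    print(water_per_cycle_litres(1.0))                    # 0.2
    print(water_per_day_litres(10.0, cycles_per_day=1))   # 2.0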

    The key advantage of their material is the lack of any artificial power source. After all, we pull moisture out of the air with dehumidifiers, but they also suck up a good deal of electricity. Powering the device with sunlight bypasses that requirement, which is especially important in developing nations lacking infrastructure.

    Other devices that rely on a similar concept have been proposed, such as the Warka Water tower. Using designs borrowed from desert plants, the tower exploits temperature swings between day and night to capture water and funnel it to a tank. The MOF material operates in a similar way, but the porous mesh allows for more water to be held in a smaller area.

    To keep water flowing constantly, the researchers say that the system could soak up water overnight to disperse during the daytime, or it could be designed to force more air through, increasing the rate of water collection. The team is still tweaking both the design and the materials to coax more productivity out of the mesh.

  3. #13

    The tiny changes that can cause AI to fail

    bbc.com

    Aviva Hope Rutkin

    The year is 2022. You’re riding along in a self-driving car on a routine trip through the city. The car comes to a stop sign it’s passed a hundred times before – but this time, it blows right through it.

    To you, the stop sign looks exactly the same as any other. But to the car, it looks like something entirely different. Minutes earlier, unbeknownst to either you or the machine, a scam artist stuck a small sticker onto the sign: unnoticeable to the human eye, inescapable to the technology.

    In other words? The tiny sticker smacked on the sign is enough for the car to “see” the stop sign as something completely different from a stop sign.

    It may sound far-fetched. But a growing field of research proves that artificial intelligence can be fooled in more or less the same way, seeing one thing where humans would see something else entirely. As machine learning algorithms increasingly find their way into our roads, our finances, our healthcare system, computer scientists hope to learn more about how to defend them against these “adversarial” attacks – before someone tries to bamboozle them for real.

    “It’s something that’s a growing concern in the machine learning and AI community, especially because these algorithms are being used more and more,” says Daniel Lowd, assistant professor of computer and information science at the University of Oregon. “If spam gets through or a few emails get blocked, it’s not the end of the world. On the other hand, if you’re relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher.”

    Whether a smart machine malfunctions or is hacked hinges on the very different way that machine learning algorithms 'see' the world. In this way, to a machine, a panda could look like a gibbon, or a school bus could read as an ostrich.

    In one experiment, researchers from France and Switzerland showed how such perturbations could cause a computer to mistake a squirrel for a grey fox, or a coffee pot for a macaw.

    How can this be? Think of a child learning to recognise numbers. As they look at each one in turn, they start to pick up on certain common characteristics: ones are tall and slender, sixes and nines contain one big loop while eights have two, and so on. Once they’ve seen enough examples, they can quickly recognise new digits as fours or eights or threes – even if, thanks to the font or the handwriting, a given digit doesn’t look exactly like any other four or eight or three they’ve seen before.

    Machine learning algorithms learn to read the world through a somewhat similar process. Scientists will feed a computer with hundreds or thousands of (usually labelled) examples of whatever it is they’d like the computer to detect. As the machine sifts through the data – this is a number, this is not, this is a number, this is not – it starts to pick up on features that give the answer away. Soon, it’s able to look at a picture and declare, “This is a five!” with high accuracy.
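
    (Not from the article, but the digit-recognition process described above maps almost directly onto a few lines of scikit-learn. This is just a generic sketch of "feed it labelled examples, let it find the telltale features"; it has nothing to do with any specific system mentioned in the piece.)

    Code:
    # Minimal sketch of training a digit classifier from labelled examples,
    # using scikit-learn's small built-in digits dataset.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()                      # 8x8 grayscale digit images
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    clf = LogisticRegression(max_iter=2000)     # a simple linear classifier
    clf.fit(X_train, y_train)                   # "this is a five, this is not..."

    print("test accuracy:", clf.score(X_test, y_test))      # typically ~0.95+
    print("prediction for one image:", clf.predict(X_test[:1])[0])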

    In this way, both human children and computers alike can learn to recognise a huge array of objects, from numbers to cats to boats to individual human faces.

    But, unlike a human child, the computer isn’t paying attention to high-level details like a cat’s furry ears or the number four’s distinctive angular shape. It’s not considering the whole picture.

    Instead, it’s likely looking at the individual pixels of the picture, searching for the fastest way to tell objects apart. If the vast majority of number ones have a black pixel in one particular spot and a couple of white pixels in another particular spot, then the machine may make a call after only checking that handful of pixels.

    Now, think back to the stop sign again. With an imperceptible tweak to the pixels of the image – or what experts call “perturbations” – the computer is fooled into thinking that the stop sign is something it isn’t.
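
    (Again not from the article: here is a toy, white-box version of the perturbation idea against the same kind of linear digit classifier as above. The published attacks the article alludes to, such as the fast gradient sign method, target deep networks, but the principle -- nudge each pixel slightly in whichever direction favours a wrong class -- is the same, and with a modest nudge the prediction often changes even though no single pixel moves much relative to its 0-16 range.)

    Code:
    # Toy adversarial perturbation against a linear digit classifier.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

    x = X_test[0].copy()
    pred = int(clf.predict([x])[0])
    target = (pred + 1) % 10                   # any class other than the current one

    # Move every pixel a little in the direction that raises the target class
    # score relative to the current class (sign of the weight difference).
    eps = 2.0                                  # pixel values here range 0..16
    direction = np.sign(clf.coef_[target] - clf.coef_[pred])
    x_adv = np.clip(x + eps * direction, 0, 16)

    print("original prediction: ", pred)
    print("perturbed prediction:", int(clf.predict([x_adv])[0]))
    print("max per-pixel change:", np.abs(x_adv - x).max())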

    Similar research from the Evolving Artificial Intelligence Laboratory at the University of Wyoming and Cornell University has produced a bounty of optical illusions for artificial intelligence. These psychedelic images of abstract patterns and colours look like nothing much to humans, but are rapidly recognised by the computer as snakes or rifles. These results show how an AI can look at something and be entirely wrong about what the object actually is.

    This weakness is common across all types of machine learning algorithms. “One would expect every algorithm has a chink in the armour,” says Yevgeniy Vorobeychik, assistant professor of computer science and computer engineering at Vanderbilt University. “We live in a really complicated multi-dimensional world, and algorithms, by their nature, are only focused on a relatively small portion of it.”

    Vorobeychik is “very confident” that, if these vulnerabilities exist, someone will figure out how to exploit them. Someone likely already has.

    Consider spam filters, automated programmes that weed out any dodgy-looking emails. Spammers can try to scale over the wall by tweaking the spelling of words (Viagra to vi@gra) or by appending a list of “good words” typically found in legitimate emails: words like, according to one algorithm, “glad”, “me” or “yup”. Meanwhile, spammers could try to drown out words that often pop up in illegitimate emails, like “claim” or “mobile” or “won”.
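
    (A crude illustration, not from the article: the "good word" trick is easy to see with a toy word-scoring filter. The per-word weights below are invented for the example; a real filter learns them from training mail, for instance as naive Bayes log-likelihood ratios.)

    Code:
    # Toy word-scoring spam filter and the "good word" padding attack.
    SPAM_WEIGHT = {          # positive = spammy, negative = legitimate-looking
        "viagra": 4.0, "claim": 2.0, "won": 2.5, "mobile": 1.5,
        "glad": -1.0, "me": -0.5, "yup": -1.5, "meeting": -1.0,
    }

    def spam_score(text, threshold=3.0):
        score = sum(SPAM_WEIGHT.get(w, 0.0) for w in text.lower().split())
        return score, score >= threshold

    original = "claim your prize you won a mobile"
    padded   = original + " glad yup yup me meeting meeting"   # appended "good words"

    print(spam_score(original))   # (6.0, True)   -> flagged as spam
    print(spam_score(padded))     # (-0.5, False) -> slips past the filter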

    What might this allow scammers to one day pull off? That self-driving car hoodwinked by a stop sign sticker is a classic scenario that’s been floated by experts in the field. Adversarial data might help slip porn past safe-content filters. Others might try to boost the numbers on a cheque. Or hackers could tweak the code of malicious software just enough to slip undetected past digital security.

    Troublemakers can figure out how to create adversarial data if they have a copy of the machine learning algorithm they want to fool. But that’s not necessary for sneaking through the algorithm’s doors. They can simply brute-force their attack, throwing slightly different versions of an email or image or whatever it is against the wall until one gets through. Over time, this could even be used to generate a new model entirely, one that learns what the good guys are looking for and how to produce data that fools them.
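
    (One more sketch, again not from the article: the query-only, trial-and-error attack can be written in a dozen lines. The scikit-learn digit classifier below just stands in for any system an attacker can query but not inspect; the attacker keeps any tweak that lowers the model's confidence in its original answer and stops once the prediction flips.)

    Code:
    # Black-box, query-only attack: random tweaks, keep what helps, repeat.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression

    digits = load_digits()
    clf = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

    rng = np.random.default_rng(0)
    x = digits.data[0].copy()
    original = int(clf.predict([x])[0])
    confidence = clf.predict_proba([x])[0][original]

    for query in range(1, 3001):
        candidate = x.copy()
        idx = rng.choice(x.size, size=3, replace=False)   # tweak 3 random pixels
        candidate[idx] = np.clip(candidate[idx] + rng.normal(0, 3, 3), 0, 16)
        new_conf = clf.predict_proba([candidate])[0][original]
        if new_conf < confidence:                         # keep helpful tweaks
            x, confidence = candidate, new_conf
            if int(clf.predict([x])[0]) != original:
                print(f"prediction flipped after {query} queries")
                break
    else:
        print("prediction unchanged within the query budget")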

    “People have been manipulating machine learning systems since they were first introduced,” says Patrick McDaniel, professor of computer science and engineering at Pennsylvania State University. “If people are using these techniques in the wild, we might not know it.”
    Scammers might not be the only ones to make hay while the sun shines. Adversarial approaches could come in handy for people hoping to avoid the X-ray eyes of modern technology.

    “If you’re some political dissident inside a repressive regime and you want to be able to conduct activities without being targeted, being able to avoid automated surveillance techniques based on machine learning would be a positive use,” says Lowd.

    In one project, published in October, researchers at Carnegie Mellon University built a pair of glasses that can subtly mislead a facial recognition system – making the computer confuse actress Reese Witherspoon for Russell Crowe. It sounds playful, but such technology could be handy for someone desperate to avoid censorship by those in power.

    So what’s an algorithm to do? “The only way to completely avoid this is to have a perfect model that is right all the time,” says Lowd. Even if we could build artificial intelligence that bested humans, the world would still contain ambiguous cases where the right answer wasn’t readily apparent.

    Machine learning algorithms are usually scored by their accuracy. A programme that recognises chairs 99% of the time is obviously better than one that only hits the mark six times out of 10. But some experts now argue that they should also measure how well the algorithm can handle an attack: the tougher, the better.

    Another solution might be for experts to put the programmes through their paces. Create your own example attacks in the lab based on what you think perpetrators might do, then show them to the machine learning algorithm. This could help it become more resilient over time – provided, of course, that the test attacks match the type that will be tried in the real world.
    McDaniel suggests we consider leaving humans in the loop when we can, providing some sort of external verification that the algorithms’ guesses are correct. Some “intelligent assistants”, like Facebook’s M, have humans double-check and soup up their answers; others have suggested that human checks could be useful in sensitive applications such as court judgments.

    “Machine learning systems are a tool to do reasoning. We need to be smart and rational about what we give them and what they tell us,” he says. “We shouldn’t treat them as perfect oracles of truth.”


  5. #14

    AI can predict heart attacks more accurately than doctors

    engadget.com
    The American College of Cardiology/American Heart Association (ACC/AHA) has developed a series of guidelines for estimating a patient's cardiovascular risk based on eight factors, including age, cholesterol level and blood pressure. On average, this system correctly guesses a person's risk at a rate of 72.8 percent.

    That's pretty accurate, but Stephen Weng and his team set about making it better. They built four machine-learning algorithms, then fed them data from 378,256 patients in the United Kingdom. The systems first used around 295,000 records to generate their internal predictive models. Then they used the remaining records to test and refine them. The algorithms' results significantly outperformed the ACC/AHA guidelines, ranging from 74.5 to 76.4 percent accuracy. The neural network algorithm tested highest, beating the existing guidelines by 7.6 percent while raising 1.6 percent fewer false alarms.

    Out of the test set of roughly 83,000 patient records, this system could have saved 355 additional lives. Interestingly, the AI systems identified a number of risk factors and predictors not covered in the existing guidelines, like severe mental illness and the consumption of oral corticosteroids. "There's a lot of interaction in biological systems," Weng told Science. "That's the reality of the human body. What computer science allows us to do is to explore those associations."
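
    (The study's records obviously aren't reproduced here, but the train-then-test workflow it describes is standard. Below is a rough Python sketch on synthetic data: hold out part of the records, fit a model on the rest, then compare against a simpler baseline. Every number in it is made up; nothing corresponds to the actual study.)

    Code:
    # Sketch of the hold-out train/test workflow described above, on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for patient records: 8 "risk factor" features.
    X, y = make_classification(n_samples=20000, n_features=8, n_informative=6,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.22, random_state=0)   # roughly the study's 295k/83k split ratio

    # Baseline: a simple linear score over the risk factors, standing in for a
    # fixed guideline-style calculator.
    baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # "Machine learning" model: a small neural network, as in the study.
    nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                       random_state=0).fit(X_train, y_train)

    print("baseline accuracy:  ", accuracy_score(y_test, baseline.predict(X_test)))
    print("neural net accuracy:", accuracy_score(y_test, nn.predict(X_test)))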

  6. #15

    88% Of Medical 'Second Opinions' Give A Different Diagnosis - And So Do Some AI

    science.slashdot.org

    First, "A new study finds that nearly 9 in 10 people who go for a second opinion after seeing a doctor are likely to leave with a refined or new diagnosis from what they were first told," according to an article shared by Slashdot reader schwit1:

    Researchers at the Mayo Clinic examined 286 patient records of individuals who had decided to consult a second opinion, hoping to determine whether being referred to a second specialist impacted one's likelihood of receiving an accurate diagnosis. The study, conducted using records of patients referred to the Mayo Clinic's General Internal Medicine Division over a two-year period, ultimately found that when a second opinion was sought, the second physician confirmed the original diagnosis only 12 percent of the time. Among those with updated diagnoses, 66% received a refined or redefined diagnosis, while 21% were diagnosed with something completely different than what their first physician concluded.

    But in a related story, Slashdot reader sciencehabit writes that four machine-learning algorithms all performed better than the currently used American College of Cardiology algorithm, according to newly published research, which concludes that "machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others."

    "I can't stress enough how important it is," one Stanford vascular surgeon told Science magazine, "and how much I really hope that doctors start to embrace the use of artificial intelligence to assist us in care of patients."

  7. #16

    Facebook reveals its camera-centric AR future

    engadget.com

    There are several elements to the Camera Effects Platform. One is Frame Studio, an app that lets anyone with a profile or page design frames for profile pictures and share them with friends. The more interesting tool is AR Studio, which is now in a closed beta on the Mac OS X platform. That app lets developers and artists build animated frames, masks and effects that can be added to video during Live broadcasts.

    Zuckerberg showed a few examples of what such apps will look like. There's a screen-based Facebook Stories app that overlays camera effects on your selfies, for one. Another is called 3D effects, a sort of portable CGI that lets you capture and interact with 3D scenes at a high level of precision, something akin to what Microsoft has done with Hololens. (By the way, Snapchat "coincidentally" launched a very similar app called 3D objects today, which must have annoyed the hell out of the Facebook team.)

    As an example of 3D effects, Zuck showed off an app that adds CGI to images, filling a room with skittles and liquids. Another involves image recognition -- if you touch a coffee mug, the camera can figure out what it is and add virtual steam, for instance. Or, if you tap a plant, you can add a shower of rain, or overlay a label, complete with a rating, onto a wine bottle.

    There are also typical filters (like those first pioneered by Snapchat, of course) that can slap a Nike headband on your selfie as part of Nike's Run Club app. Other examples include an AR art project that lets you digitally "paint" an entire room, and one that used a real table as an AR game platform. You can also leave digital "notes" on your real fridge or read a simulated menu at a restaurant.

    Of interest to gamers, Facebook showed off a Mass Effect demo that builds on the social features it first revealed at CES 2017 earlier this year. It lets you overlay the look of the game onto real life, display 3D leaderboards and more. It also supports face tracking using both the front and back cameras, real-time 3D rendering and more. Another demo showed how you can overlay Manchester United-themed confetti onto a team win, or overlay Giphy GIFs onto live video.

    All of this sounds pretty nice, but it's also merely an augmentation of current AR apps like Pokémon Go or selfie effects from Facebook, Snapchat and others. Zuckerberg didn't make much mention of new-fangled AR platforms like Hololens-like glasses or contacts that can give you simulated information or effects on the go. To be fair, however, there aren't really any such products on the market yet.

    It's also worth noting that a lot of these apps are oriented towards businesses like Nike and Mass Effect developer BioWare. As such, they could turn the Facebook app into even more of a corporate ad space, much as it's doing with Messenger. So while the new AR stuff sounds cool, it's also a way for Facebook to insinuate itself even more into our real-world lives, with the not-so-benevolent purpose of gaining even more ad revenue.


  8. #17
  9. #18

    I know that this is mostly a thread for tech stories that are interesting because they are exciting, but I personally found this interesting:
    Amazon Go grocery store opening delayed due to technical issues - Business Insider

    The big Amazon Go announcement, in which the technology giant promised to make grocery shopping easier for all of us (and to put millions of people out of a job), has seen its public opening canceled; Amazon calls it a delay but won't commit to any new opening date.

  10. #19

    Microsoft has a plan to beat Chromebooks at their own game

    engadget.com


    Assuming this chart is accurate, it gives us a good idea of what sort of hardware we'll be seeing from Windows 10 Cloud devices. The relatively modest specs include 4GB of RAM, a quad-core Celeron (or better) processor and either 32GB or 64GB of storage -- that all sounds a lot like what you'll find in a Chromebook. Microsoft is looking to achieve "all-day" battery life for "most students" and super-short boot and wake-from-sleep times, as well.

    What we've seen from Windows 10 Cloud suggests that machines running this new software will only work with Universal Windows Platform apps you get from the Microsoft Store -- traditional Windows software will be out. But for a lot of students, that plus the many web-based apps and services out there will be enough to get a lot of work done. In any event, it looks like we'll know more in less than two weeks, and we'll be at Microsoft's event to cover all the news.



    ARM-powered Windows 10 laptops will arrive this holiday

    engadget.com

    Unlike Windows RT, ARM-based hardware running Windows 10 should support proper desktop-class apps. That's because Microsoft is building an emulator directly into the OS capable of handling Adobe Photoshop, Microsoft Word, and other desktop staples. That's a big promise, but one that could change the complexion of the Windows 10 market if successful. For one, it'll be the same experience that Windows customers are used to -- no strange amalgamation of apps and environments like the Surface 2. For another, ARM-based hardware should offer longer battery life, providing better options for people who need stamina more than power.

    It's also possible that ARM-powered laptops will be cheaper than their Intel-based contemporaries. That's not guaranteed, but if it does happen Microsoft will be better positioned to tackle Chromebooks and iPads in the classroom. According to The Verge, the first wave of ARM-based devices will come from other manufacturers, rather than Microsoft's Surface division. Lenovo is reportedly working on a device, and there's a good chance we'll see a few more at Microsoft's Build conference in Seattle.

    ^

    I expect to see many Windows 2-in-1s, especially on the low end, using ARM... I also expect Microsoft to take another stab at Windows phones, but with the kicker this time that the phones can be used as a Windows laptop/desktop platform... having Windows on ARM also raises the possibility of designing Android phones/tablets that can dual-boot Windows in desktop mode.

  11. #20

    Facebook details its plans for a brain-computer interface

    engadget.com


    In a video demo, Dugan showed the example of a woman in a Stanford lab who is able to type eight words per minute directly with her brain. This means, Dugan says, that you can text your friends without using your phone's keyboard. She goes on to say that in a few years' time, the team expects to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. "That's five times faster than you can type on your smartphone, and it's straight from your brain," she said. "Your brain activity contains more information than what a word sounds like and how it's spelled; it also contains semantic information about what those words mean."

    And that's not all. Dugan adds that it's also possible to "listen" to human speech by using your skin. It's like using Braille, but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband. The armband's system of actuators was tuned to 16 frequency bands, and has a tactile vocabulary of nine words, learned in about an hour.

    This, Dugan says, also has the potential of removing language barriers. "You could think in Mandarin, but feel in Spanish," she said. "We are wired to communicate and connect."
    Of course, a lot of this tech is still a few years out. And this is just a small sample of what Dugan has been working on since she joined Facebook in April 2016. She served as the 19th Director of the United States' Defense Advanced Research Projects Agency and she's the former head of Google's Advanced Technology and Projects group (no big deal).

    The stuff that Facebook is creating at Building 8 is modeled after DARPA, and would integrate both mind and body. "Our world is both digital and physical," she said, saying there's no need to put down your phone in order to communicate with the people in front of you. That is a false choice, she said, because you need both. "Our goal is to create and ship new, category-defining consumer products that are social first, at scale," she said. "We can honor the intimacy of what's timeless, and also create products that refuse to accept that false choice."

    In a Facebook post, CEO Mark Zuckerberg states: "Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world -- speech -- can only transmit about the same amount of data as a 1980s modem. We're working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no "brain click" would help make things like augmented reality feel much more natural."



     