Personality Cafe


·
Banned
Joined
·
21,040 Posts
Discussion Starter #2
Scientists turn mammalian cells into complex biocomputers

sciencemag.org

By Robert F. Service | Mar. 27, 2017, 11:00 AM

Adding genetic circuits to cells lets researchers control their actions, setting the stage for new ways to treat cancer and other diseases.
ktsimage/iStockphoto

Computer hardware is getting a softer side. A research team has come up with a way of genetically engineering the DNA of mammalian cells to carry out complex computations, in effect turning the cells into biocomputers. The group hasn’t put those modified cells to work in useful ways yet, but down the road researchers hope the new programming techniques will help improve everything from cancer therapy to on-demand tissues that can replace worn-out body parts.

Engineering cells to function like minicomputers isn’t new. As part of the growing field of synthetic biology, research teams around the globe have been manipulating DNA for years to make cells perform simple actions like lighting up when oxygen levels drop. To date, most such experiments have been done in Escherichia coli and other bacteria, because their genes are relatively easy to manipulate. Researchers have also managed to link multiple genetic circuits together within a single cell to carry out more complex calculations in bacteria.

Scientists have tried to extend this to mammalian cells to create genetic circuitry that can help detect and treat human diseases. But efforts to construct large-scale genetic circuits in mammalian cells have largely failed: For complex circuits to work, the individual components—the turning on and off of different genes—must happen consistently. The most common way to turn a gene on or off is by using proteins called transcription factors that bind to and regulate the expression of a specific gene. The problem is these transcription factors “all behave slightly differently,” says Wilson Wong, a synthetic biologist at Boston University.

To upgrade their DNA “switches,” Wong and his colleagues steered clear of transcription factors and instead switched human kidney cell genes on and off using scissorlike enzymes that selectively cut out snippets of DNA. These enzymes, known as DNA recombinases, recognize two target stretches of DNA, each between 30 to 50 or more base pairs long. When a recombinase finds its target DNA stretches, it cuts out any DNA in between, and stitches the severed ends of the double helix back together.

To design genetic circuits, Wong and his colleagues use the conventional cellular machinery that reads out a cell’s DNA, transcribes its genes into RNA, and then translates the RNA into proteins. This normal gene-to-protein operation is initiated by another DNA snippet, a promoter, that sits just upstream of a gene. When a promoter is activated, a molecule called RNA polymerase gets to work, marching down the DNA strand and producing an RNA until it reaches another DNA snippet—a termination sequence—that tells it to stop.

To make one of their simplest circuits, Wong’s team inserted four extra snippets of DNA after a promoter. The main one produced green fluorescent protein (GFP), which lights up cells when it is produced. But in front of it was a termination sequence, flanked by two snippets that signaled the DNA recombinase. Wong and his team then inserted another gene in the same cell that made a modified recombinase, activated only when bound to a specific drug; without the drug, the recombinase wouldn’t cut the DNA.

When the promoter upstream of the GFP gene was activated, the RNA polymerase ran headfirst into the termination sequence, stopped reading the DNA, and didn’t produce the fluorescent protein. But when the drug was added, the recombinase switched on and spliced out the termination sequence that was preventing the RNA polymerase from initiating production of GFP. Voila, the cell lit up.

As if that Rube Goldbergian feat weren’t enough, Wong and his colleagues also showed that by adding additional recombinases together with different target strands, they could build a wide variety of circuits, each designed to carry out a different logical operation. The approach worked so well that the team built 113 different circuits, with a 96.5% success rate, they report today in Nature Biotechnology. As a further demonstration, they engineered human cells to produce a biological version of something called a Boolean logic lookup table. The circuit in this case has six different inputs, which can combine in different ways to execute one of 16 different logical operations.
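The recombinase switches described above behave like Boolean gates, which is why the team could compose them into a lookup table. A minimal sketch in Python, entirely illustrative: the function names and the two-drug AND cassette below are assumptions for explanation, not the paper's actual design.

```python
# Illustrative model of a recombinase-gated expression cassette.
# Each drug-activated recombinase is treated as a Boolean input;
# the output is whether the gene downstream of the promoter is expressed.

def gfp_expressed(promoter_on: bool, drug_present: bool) -> bool:
    """GFP is made only if the promoter fires AND the drug-activated
    recombinase has excised the termination sequence blocking it."""
    terminator_excised = drug_present  # recombinase cuts only when drug-bound
    return promoter_on and terminator_excised

def two_drug_and(drug_a: bool, drug_b: bool, promoter_on: bool = True) -> bool:
    """A hypothetical cassette with two independent terminators, each
    removed by its own recombinase, behaves as a two-input AND gate."""
    return promoter_on and drug_a and drug_b
```

In this framing, the six-input lookup table is just a larger truth table: each combination of inputs excises a different set of DNA segments, selecting which of the 16 logical operations the cell computes.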

“It’s exciting in that it represents another scale at which we can design mammalian genetic circuits,” says Timothy Lu, a synthetic biologist at the Massachusetts Institute of Technology in Cambridge. Although the current circuits are a proof of concept, both Lu and Wong say synthetic biologists want to use them to create new medical therapies. For example, scientists could engineer T cells, sentinels of the immune system, with genetic circuits that initiate a response to wipe out tumors when they detect the presence of two or three “biomarkers” produced by cancer cells, Lu says. Another example being explored by Wong and others is to engineer stem cells so they develop into specific cell types when prompted by different signals. This could let synthetic biologists generate tissues on demand, such as insulin-producing β cells, or cartilage-producing chondrocytes.
 

Discussion Starter #3
Your Cat Thinks You're Cool

https://www.scientificamerican.com/podcast/episode/your-cat-thinks-youre-cool/
 

Discussion Starter #4
Red Planet versus Dead Planet: Scientists Debate Next Destination for Astronauts in S

scientificamerican.com

Leonard David

THE WOODLANDS, Texas—Should the U.S. send humans back to the moon in a 21st-century reboot of the cold war–era Apollo program…or should the nation go full-throttle and for the gusto, sending crews all the way to Mars, where none have gone before? U.S. scientists and policy makers have grappled ad nauseam with America’s next great otherworldly destination for decades, without making much meaningful progress. Now that it is approaching a half-century since an American—or anyone at all, for that matter—last left low Earth orbit, the debate seems lost in space.

Soon that shall change, many advocates of human spaceflight believe, through a hybrid of new initiatives by Pres. Donald Trump’s administration as well as commercial efforts led by private industry. The Trump White House’s vision for U.S. astronauts remains at present a foggy TBD, but there are plans afoot to relaunch a National Space Council. Helmed by Vice Pres. Mike Pence, the council would set a new space agenda not only for NASA but also for U.S. rocket companies, big and small, such as SpaceX, Blue Origin, Boeing, Lockheed Martin and Orbital ATK.

In the meantime, speculation about the U.S.'s future in space has reached its highest point in recent memory, as made clear here last week by the proceedings of the 48th Lunar and Planetary Science Conference (LPSC). At the meeting, scientists unleashed the latest findings regarding Earth’s moon, Mars, asteroids, comets and myriad other cosmic objects of interest, often with a hopeful eye toward rekindling human voyages to other worlds. Although robotic probes are the persistent currency of discovery in today’s planetary science, many researchers increasingly see astronauts as crucial agents of exploration in the not-too-distant future.

Destination Moon

“Planetary science will completely change once we get crew beyond low Earth orbit,” says David Kring, a senior staff scientist at the Lunar and Planetary Institute. “The best way to explore the moon is by the well-trained astronaut, hands down. Apollo demonstrated that wonderfully.”

Kring says he is eager to see the first NASA exploration missions using the agency’s Space Launch System (SLS) rocket, which is currently being developed along with a crewed Orion spacecraft. At the Trump administration’s insistence, NASA is assessing the prospect of flying a two-person crew around the moon in mid-2019—years ahead of schedule for the delay-plagued SLS and Orion programs. “I’m even more anxious to see crews deploy robotic assets to the lunar surface and eventually land there themselves,” Kring adds. “We need to get back on the surface. We need to collect samples. And we need to bring them back to Earth.”

A Scientific Bonanza

The moon is a bonanza for scientists, Kring says, because it offers crucial insights for understanding the origins and evolution of Earth and other planets: how they formed from the accretion and differentiation of smaller bodies; how they were bombarded by impacts early in their histories; and even how some of them migrated in their orbits around the sun. “The best place to answer those questions is on the moon,” he explains, given that its airless surface contains the scarcely altered imprints of 4.5 billion years of solar system history.
You can’t be a Martian without being a lunatic, suggests Clive Neal, a lunar scientist at the University of Notre Dame. Credit: Barbara David

Here on Earth, destructive geologic processes cloud our view of those long-gone formative eons, Kring says. Even on modern-day Mars, a planet far more inert than Earth, many of the answers we might seek to our solar system’s deepest mysteries have been erased by the slow workings of geology.

Kring also sees the moon as a gateway to Mars. “We have to have legitimate, meaningful milestones on our way to Mars,” he explains. “We all want to get humans on Mars. The question is how do you get there? I don’t think we’re going to develop the right workforce with the capabilities to magically get to Mars by 2035 or 2045. We need to develop the techniques and the workforce for that leap, and that can happen in [lunar orbit] and on the moon.”

Every Martian Is a Lunatic

According to Clive Neal, a lunar scientist at the University of Notre Dame, any moon-versus-Mars argument is a nonstarter. “It’s not either-or,” he says, because the moon can enable Mars by tapping lunar resources to support a sustainable human expansion deeper into the solar system.

“You can’t be a Martian without being a lunatic,” Neal says. “If you want to do ‘flags and footprints,’ go to Mars now. But you’ll never go back, because that’s Apollo—a fantastic program, but it was not sustainable.”

To Neal, Earth's satellite is first and foremost a world rich in resources that can and should be used. For example, he pointed to sun-shy craters at the lunar poles, where near-constant darkness has trapped and preserved water ice ripe for conversion into oxygen, water and rocket propellant. “We have to do some basic geologic prospecting,” he says. And if the moon’s resources are shown to be substantial, “you then bring the moon into our economic sphere of influence. I view the moon as enabling, and that comes through its resources.”

Apollo Dreams

Speaking at a breakout session prior to the formal start of the LPSC gathering last week, Apollo 17 moon walker and geologist Jack Schmitt reflected on the value of human exploration of the moon. It had been nearly 45 years since Schmitt bunny-hopped his way across the low-gravity lunar landscape in December 1972 during the final Apollo mission; half of Apollo’s 12 moon walkers have now died. With the passing of his Apollo 17 crewmate, Gene Cernan, earlier this year, Schmitt spoke as the last living person from that mission to have set foot on the moon.

Schmitt’s speech raised issues familiar to many in the audience. For decades, he has championed the potential economics of lunar mining for helium 3, an isotope that could be crucial for certain forms of nuclear fusion. The lunar surface has soaked up vast quantities of helium 3 from billions of years of bombardment by the solar wind, Schmitt explained, and drawing on that resource is how a lunar settlement could support itself. Provided, that is, that scientists back on Earth can first figure out how to make nuclear fusion an economically viable power source—a goal that has eluded them for decades.
Schmitt’s faith in a lunar future for humankind is unwavering. “A settlement on the Moon based on helium 3 export to Earth for fusion power makes a lot of sense to me. It starts not only to make us a two-planet species but enables, I think, Mars exploration in many different ways,” he noted.
Apollo 17 moon walker and geologist Jack Schmitt champions the possible economics of mining helium 3 on the moon. Credit: Barbara David

For example, he said, helium 3 mining would produce by-products including water, hydrogen, carbon and nitrogen. These useful substances exist in only the most minuscule traces in lunar soil—but such an enormous amount of surface material would have to be processed to harvest helium 3 that they would accumulate in significant amounts. Water sourced from the low-gravity moon, Schmitt explained, could be utilized as a protective, radiation-thwarting cocoon, built into the superstructures of Mars-bound crewed spacecraft. “A few inches of water around a spacecraft weighs an awful lot, and it is expensive bringing it from Earth. You can produce water anywhere on the moon,” he said.

Red Planet Runs

Others have little time for the moon and the decades that would be required to develop infrastructure there. Their eyes are instead on the bigger prize: Mars. Elon Musk, SpaceX’s CEO and chief rocketeer, is the foremost example of the “Mars first” contingent. And according to SpaceX engineer Paul Wooster, Red Planet planning by Musk’s private company is steadily progressing. “The vision for SpaceX, long-term, is making it possible for large numbers of people to go to Mars,” he says.

SpaceX plans to build a mega-rocket and a giant interplanetary crew transporter to populate Martian outposts and eventually a full-size city, Wooster reports. But before the company can achieve those wild goals it must first firm up its capability to send something—anything at all—to Mars. That would come via interplanetary flights of the firm’s Red Dragon spacecraft—a derivative of the SpaceX Dragon capsule that has already hauled cargo to the International Space Station and in due course will take astronauts there. Wooster says SpaceX is intent on rapidly building up surface infrastructure on Mars, hopefully beginning by the mid-2020s. “We obviously have a lot of work ahead of us,” he says.

A crucial component of the SpaceX plan for Mars has been demonstrated several times here on Earth. The company has repeatedly landed its Falcon 9 rocket’s first stage at sea on a drone ship, and on land at Florida’s Cape Canaveral Air Force Station. Because the first stage can then be repeatedly reused rather than flown once and discarded, an economy of scale could develop that greatly reduces the cost of access to space, and thus the price tag of a bank-busting plan to colonize Mars. For SpaceX’s ambitious plans to work, the company will have to develop and demonstrate reusability on its next generation of rockets poised to debut after Falcon 9.

Wooster says an unpiloted SpaceX Red Dragon flight to Mars, able to deliver roughly one ton of useful payload, is being considered for 2020. Other Red Dragons could follow every two years or so, when Mars and Earth are in favorable alignments that minimize the fuel needed for an interplanetary crossing.
SpaceX Red Dragon nears autopilot touchdown on Mars. The private firm has the Red Planet in its sights to establish an outpost, and eventually a city, on that distant world. Credit: SpaceX

SpaceX Marks the Spot

“First and foremost is to learn how to land large payloads on Mars,” Wooster says. In preparation for planting an outpost on that far-off world, experiments onboard Red Dragon are set to test on-the-spot propellant production. That can be done, he says, by processing water from Mars’ surface and with gases extracted from the carbon dioxide–rich atmosphere. In fact, NASA is also set to try out something similar—a Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) on the space agency’s Mars 2020 rover.

SpaceX has been quietly working with NASA and non-NASA landing site specialists to plot locales for plopping down its spacecraft. Site selection is driven by the quantity of water the firm is looking for—thousands of tons, Wooster explains. One such spot “is looking quite promising,” he says: Arcadia Planitia, a smooth plain on Mars that appears to have large quantities of ice near the surface.
Of course, where there is ice there may well be subsurface pockets of liquid water—and potentially life, raising the possibility that SpaceX could violate “planetary protection” protocols by landing in such regions. Wooster says SpaceX is working with NASA’s Office of Planetary Protection to properly address such concerns. For now, he reiterates that the company is most definitely open for business and eager to entice researchers to make use of Red Dragon for toting their experiments. “SpaceX is a transportation company,” Wooster explains. “We’re very happy to deliver payloads to Mars for various people,” he adds, an offer that has also piqued NASA’s interest in contracting with the company to launch a potential science experiment in 2020. “We want to turn this into a steady cadence where we are sending [Red] Dragons to Mars based on every opportunity as we go forward, and eventually shifting over to our large Mars vehicle to deliver very large payloads,” he concludes.

Be it back to the moon or first footfalls on Mars, the trajectory taken by the U.S. appears to have the research community in ready-and-waiting mode. Whether it comes via government-backed public-private partnerships, international collaboration or go-it-alone endeavors by the only nation to have landed astronauts on the moon, there is plenty of extraterrestrial science that can—and will—be done, potentially even by humans.
 

Discussion Starter #6
Surprising study finds that cats actually prefer people over food. - Seriously, Scien

blogs.discovermagazine.com

If you’ve ever had a cat, you probably believe that, given the choice, your cat would always choose food over you. But assumptions are not always correct, which is why we test them with science! Here, scientists tested whether pet and shelter cats prefer social interaction, food, scent, or toys. They found that “although there was clear individual variability in cat preference, social interaction with humans was the most-preferred stimulus category for the majority of cats, followed by food.” Now, doesn’t that make you feel special?

Social interaction, food, scent or toys? A formal assessment of domestic pet and shelter cat (Felis silvestris catus) preferences

“Domestic cats (Felis silvestris catus) engage in a variety of relationships with humans and can be conditioned to engage in numerous behaviors using Pavlovian and operant methods. Increasingly cat cognition research is providing evidence of their complex socio-cognitive and problem solving abilities. Nonetheless, it is still common belief that cats are not especially sociable or trainable. This disconnect may be due, in part, to a lack of knowledge of what stimuli cats prefer, and thus may be most motivated to work for. The current study investigated domestic cat preferences at the individual and population level using a free operant preference assessment. Adult cats from two populations (pet and shelter) were presented with three stimuli within each of the following four categories: human social interaction, food, toy, and scent. Proportion of time interacting with each stimulus was recorded. The single most-preferred stimulus from each of the four categories were simultaneously presented in a final session to determine each cat’s most-preferred stimulus overall. Although there was clear individual variability in cat preference, social interaction with humans was the most-preferred stimulus category for the majority of cats, followed by food. This was true for cats in both the pet and shelter population. Future research can examine the use of preferred stimuli as enrichment in applied settings and assess individual cats’ motivation to work for their most-preferred stimulus as a measure of reinforcer efficacy.”
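The scoring described in the abstract (proportion of session time spent interacting with each stimulus, with category winners advancing to a head-to-head final) is simple to express in code. A sketch with invented numbers; the session length and the data for "cat_1" are assumptions for illustration only:

```python
# Hypothetical free-operant scoring: for each cat, compute the fraction of
# session time spent interacting with each stimulus, then pick the winner.

def preference_proportions(seconds_by_stimulus: dict, session_seconds: float) -> dict:
    return {s: t / session_seconds for s, t in seconds_by_stimulus.items()}

session = 180.0  # assumed 3-minute final session
cat_1 = {"human": 95.0, "food": 60.0, "toy": 15.0, "scent": 5.0}  # invented data

props = preference_proportions(cat_1, session)
most_preferred = max(props, key=props.get)  # highest proportion wins
```

For this invented cat, social interaction wins, matching the population-level result the abstract reports.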
 

Discussion Starter #7
Climate Change Makes Farmers Chase New Planting Windows

blogs.discovermagazine.com

A farmer climbs into a combine. (Credit: USDA/Lance Cheung)

Most people think of frost as a farmer’s worst nightmare. But for corn growers in Illinois, there’s little worse than a warm, soggy spring. Rainfall can soak soft prairie soils and rot the kernels before they can grow. If the rains keep farmers from their fields long enough, crop yields start to plummet. Rain can also wash away herbicides, pushing growers to apply more.

For years, this fear has driven farmers to plant earlier and earlier. Late April used to be the prime planting window. This year, weather permitting, many will begin planting this week.

Emerson Nafziger, a University of Illinois extension specialist, says each year he hears stories of people planting earlier than the last. Some of those are just tales for the coffee shop, he says. This year he heard rumors of people planting in February. But he’s seen the trend himself over recent decades, though he points out that seed treatments and high-tech farm equipment are as responsible for jumping the gun as the weather.

“Forty years ago a farmer with good conditions the first week of April almost certainly would not have planted,” he says. “It was seen as too risky. Today that’s not the case.”

These trends, along with a string of wet springs late in the last decade, prompted U.S. Department of Agriculture scientist Adam Smith to investigate how planting windows might shift even more with climate change in the years to come.

He and his colleagues used the latest climate models to see what might happen in Illinois down the road. They found spring continues to get warmer and wetter, but summers also get hotter and drier. Both are bad for crop yield: if the plant overheats while it’s maturing, it makes less corn, and seed planted too early can freeze in the ground.

Their models show two planting seasons emerge in the future. One happens in March, as warmer winters let farmers plant earlier and earlier. The other comes between May and June, after the soggiest weather but before the heat.

“The season fragments and we start to see an early-early season, so that March starts looking like a good target for planting in the future,” he says. “In the past, March has been the bleeding edge. Nobody in their right mind would have planted then. But we’ve already seen the trend for early planting.”

Timeliness has always been vital in farming, but soon many Midwest growers will have to decide between these two contrasting strategies. Do they plant early and risk the cold, or do they plant late and risk the heat?

“There’s a clock ticking as soon as it begins to warm up in the spring and the field is plantable,” Smith says.
 

Discussion Starter #9
Simulation Suggests 68 Percent of the Universe May Not Actually Exist

science.slashdot.org

Posted by Slashdot

boley1 quotes a report from New Atlas:
According to the Lambda Cold Dark Matter (Lambda-CDM) model, which is the current accepted standard for how the universe began and evolved, the ordinary matter we encounter every day only makes up around five percent of the universe's density, with dark matter comprising 27 percent, and the remaining 68 percent made up of dark energy, a so-far theoretical force driving the expansion of the universe. A new study has questioned whether dark energy exists at all, citing computer simulations that found that by accounting for the changing structure of the cosmos, the gap in the theory, which dark energy was proposed to fill, vanishes. According to the new study from Eotvos Lorand University in Hungary and the University of Hawaii, the discrepancy that dark energy was "invented" to fill might have arisen from the parts of the theory that were glossed over for the sake of simplicity.

The researchers set up a computer simulation of how the universe formed, based on its large-scale structure. That structure apparently takes the form of "foam," where galaxies are found on the thin walls of each bubble, but large pockets in the middle are mostly devoid of both normal and dark matter. The team simulated how gravity would affect matter in this structure and found that, rather than the universe expanding in a smooth, uniform manner, different parts of it would expand at different rates. Importantly, though, the overall average rate of expansion is still consistent with observations, and points to accelerated expansion. The end result is what the team calls the Avera model. If the research stands up to scrutiny, it could change the direction of the study of physics away from chasing the ghost of dark energy.

"The theory of general relativity is fundamental in understanding the way the universe evolves," says Dr. Laszlo Dobos, co-author of the new paper. "We do not question its validity; we question the validity of the approximate solutions. Our findings rely on a mathematical conjecture which permits the differential expansion of space, consistent with general relativity, and they show how the formation of complex structures of matter affects the expansion. These issues were previously swept under the rug but taking them into account can explain the acceleration without the need for dark energy."

The study has been published in the Monthly Notices of the Royal Astronomical Society.
 

Discussion Starter #10
Octopuses Edit Their Genetic Code Like No Other Animal - D-brief

blogs.discovermagazine.com

(Credit: Wikimedia Commons)

New research into the cephalopod genome is undermining our assumptions about evolution, and the role that DNA mutations play in updating a species’ physiology.
Researchers from the Marine Biological Laboratory in Woods Hole and Tel Aviv University have been studying how cephalopods — squids, octopuses, cuttlefish and nautiluses — edit their genome, and found that instead of relying on DNA mutations to adapt, they have the ability to make changes to their RNA, the genetic “messengers” that carry out the instructions written by DNA. This means that their fundamental genetic code remains largely the same from generation to generation, while changes occur at the level of the individual and don’t carry over to their offspring.

Don’t Alter the Messenger

In humans, less than one percent of our RNA transcripts show signs of editing, and the same holds true across most other species. In our cells, DNA instructions get copied faithfully to RNA, which then carries out its missions as instructed. Changes, if they do occur, happen at the level of the species and take generations. Cephalopods, however, have figured out how to tinker with the process of transcribing DNA to RNA, editing their genetic messages to create changes on an individual level.

Looking at a previously published octopus genome to search for signs of editing, researchers report that the level of RNA editing is about an order of magnitude higher than in primates. This means that octopuses alter the messages written by their DNA, transforming the original code into custom commands. The result is the production of novel proteins and enzymes that could potentially grant them new abilities.

Back in 2015, some of the same researchers discovered that octopuses edit their RNA more often than other species. Now, they’ve gone a step further by searching through a whole octopus genome to find where and when these edits happen and how this could affect their evolutionary history. They published their findings Thursday in Cell.

Many of the RNA edits occur in cephalopod brains, say the researchers, such as one adaptation that allows their neurons to function in cold environments. Octopuses are infamously smart creatures, able to open jar lids and even escape their aquariums, and the researchers say that the ability to make changes to their RNA could play a role in their intelligence. Though no definitive evidence exists, the researchers say that the effects of such RNA editing are likely “profound and complex.”

Further shoring up their claim is the discovery that nautiluses, which don’t share octopuses’ smarts, don’t rely as heavily on RNA editing. If the researchers’ theory is correct, being able to alter RNA could be an important factor in the species’ IQ. They still don’t, however, know what causes some bits of RNA to change after transcription while others stay the same. It’s likely not anything conscious on the part of the cephalopods, and could simply be the hand of natural selection favoring beneficial alterations to RNA.

Evolutionary Trade-off

What cephalopods have done, essentially, is to trade long-term, DNA-driven evolution for more immediate and individual adaptability. The researchers found that their DNA showed much lower rates of mutation than in most creatures, something they say is necessary for this type of RNA editing.

The parts of their genome that code for RNA editing are large, making up anywhere from 23 to 41 percent of protein-coding sequences, depending on the species. If any of these regions were altered by mutation, the animals would lose the ability to edit their RNA. So they’ve favored immutability in this part of the genome, vastly slowing their rate of evolution. The upside, however, is that individual cephalopod bodies can undergo relatively sweeping changes.

The new insights into cephalopod evolution have also pushed back the timeline for cephalopods. Most estimates of when a species first appeared are based on “molecular clock” analyses, which take a known rate of genetic mutation and extrapolate backwards to find when they would have first appeared. If squids and octopuses were experiencing mutations at a much lower rate, it would greatly extend their plausible history.
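The molecular-clock logic in the last paragraph can be made concrete with a few lines of arithmetic. The rates and divergence value below are invented for illustration; the point is only the proportionality, not actual cephalopod numbers:

```python
# Molecular clock: divergence time ~ substitutions per site / (2 * rate).
# The factor of 2 is there because changes accumulate along both
# diverging lineages after they split.

def divergence_time_years(subs_per_site: float, rate_per_year: float) -> float:
    return subs_per_site / (2.0 * rate_per_year)

d = 0.02                                   # observed divergence (invented)
typical = divergence_time_years(d, 1e-9)   # a "normal" mutation rate
slowed = divergence_time_years(d, 2e-10)   # a rate five times slower
```

The same observed divergence under a five-times-slower clock implies a five-times-older split, which is why a reduced mutation rate would stretch the cephalopod timeline backward.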
 

Discussion Starter #11
Are some wolves being ‘redomesticated’ into dogs?

sciencemag.org

By Virginia Morell | Apr. 5, 2017, 12:00 PM

Will trash-eating wolves turn into a new kind of dog?
KenCanning/iStockphoto

It happened thousands of years ago, and it may be happening again: Wolves in various parts of the world may have started on the path to becoming dogs. That’s the conclusion of a new study, which finds that the animals are increasingly dining on livestock and human garbage instead of their wild prey, inching closer and closer to the human world in some places. But given today’s industrialized societies, this closeness might also bring humans and wolves into more conflict, with disastrous consequences for both.

“It’s a thought-provoking study, and does a good job of laying out how diet has the potential to change a large predator,” says Lee Dugatkin, an evolutionary biologist at the University of Louisville in Kentucky, who wasn’t involved in the research.

To find out how gray wolves might be affected by eating more people food, Thomas Newsome, an evolutionary biologist at Deakin University in Melbourne, Australia, and his colleagues examined studies of what’s happened to other large carnivores that live close to people. Asiatic lions in the Gir protected area of western India, for instance, primarily kill and eat livestock, and have grown so much less aggressive toward humans that tourists can visit them on foot. In Israel, red foxes live longer and use smaller home ranges when they rely on a diet of leftovers. In contrast, black bears in North America that dine on human garbage are more likely to die young—because people kill them.

Newsome’s 2014 study of a dingo population in Australia’s Tanami Desert showed that the wild dogs’ habit of dining almost exclusively on junk food at a waste management facility had made them fat and less aggressive. They were also more likely to mate with local dogs and had become “cheeky,” says Newsome, daring to run between his legs as he set out traps for them. Most intriguingly, the dumpster dingoes’ population formed a genetic cluster distinct from all other dingoes—indicating that they were becoming genetically isolated, a key step in forming a new species.

Is this happening to gray wolves? The conditions are ripe for it, says Newsome, noting that human foods already make up 32% of gray wolf diets around the world. The animals now mostly range across remote regions of Eurasia and North America, yet some are returning to developed areas. Wolves in Greece primarily consume pigs, goats, and sheep; those in Spain feed mainly on ponies and other livestock; and Iranian wolves rarely eat anything other than chickens, domestic goats, and garbage. “Based on what’s happened to these other carnivores [that eat human foods], we think these wolves will change,” Newsome says.

The wolves’ new diet could affect everything from the size of their packs to their social behaviors, the team reports today in Bioscience. Like the dingoes, these wolves will probably mate with more dogs and, in North America, with coyotes, the researchers say. Newsome expects that they will also begin to diverge genetically from prey-hunting wolves, just as the dumpster dingoes did. Because ancient wolves are believed to have evolved into dogs by eating food scraps and garbage at human camps, we may also be seeing “the makings of a new dog” today, hypothesizes Newsome, who plans to begin testing his idea with wolves in Washington state.

Not everyone is convinced. “I doubt if we’re domesticating wolves that eat human-sourced food,” says Robert Wayne, an evolutionary biologist and expert on canine genetics at the University of California, Los Angeles. “That diet is more likely to get them killed.” Unlike the trash-picking dingoes, which reduced their territories, wolves still range so widely that garbage-eaters are less likely to become genetically isolated from the rest of their population, he says. Bobcats, coyotes, and other animals that are already well-integrated in our neighborhoods are more likely to become domesticated, he adds.

Wayne and Newsome agree that for all these species, the best outcome isn’t domestication, but restoration of their habitats and natural prey in places where they can avoid people, livestock, and trash. If humans can arrange that, we won’t have a new dog, Newsome says. But we’ll still have wolves.
 

Discussion Starter #12
Neuroscientists identify brain circuit necessary for memory formation

news.mit.edu

When we visit a friend or go to the beach, our brain stores a short-term memory of the experience in a part of the brain called the hippocampus. Those memories are later “consolidated” — that is, transferred to another part of the brain for longer-term storage.

A new MIT study of the neural circuits that underlie this process reveals, for the first time, that memories are actually formed simultaneously in the hippocampus and the long-term storage location in the brain’s cortex. However, the long-term memories remain “silent” for about two weeks before reaching a mature state.

“This and other findings in this paper provide a comprehensive circuit mechanism for consolidation of memory,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, the director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory, and the study’s senior author.

The findings, which appear in Science on April 6, may force some revision of the dominant models of how memory consolidation occurs, the researchers say.

The paper’s lead authors are research scientist Takashi Kitamura, postdoc Sachie Ogawa, and graduate student Dheeraj Roy. Other authors are postdocs Teruhiro Okuyama and Mark Morrissey, technical associate Lillian Smith, and former postdoc Roger Redondo.

Long-term storage
Beginning in the 1950s, studies of the famous amnesiac patient Henry Molaison, then known only as Patient H.M., revealed that the hippocampus is essential for forming new long-term memories. Molaison, whose hippocampus was damaged during an operation meant to help control his epileptic seizures, was no longer able to store new memories after the operation. However, he could still access some memories that had been formed before the surgery.

This suggested that long-term episodic memories (memories of specific events) are stored outside the hippocampus. Scientists believe these memories are stored in the neocortex, the part of the brain also responsible for cognitive functions such as attention and planning.

Neuroscientists have developed two major models to describe how memories are transferred from short- to long-term memory. The earliest, known as the standard model, proposes that short-term memories are initially formed and stored in the hippocampus only, before being gradually transferred to long-term storage in the neocortex and disappearing from the hippocampus.

A more recent model, the multiple trace model, suggests that traces of episodic memories remain in the hippocampus. These traces may store details of the memory, while the more general outlines are stored in the neocortex.

Until recently, there has been no good way to test these theories. Most previous studies of memory were based on analyzing how damage to certain brain areas affects memories. However, in 2012, Tonegawa’s lab developed a way to label cells called engram cells, which contain specific memories. This allows the researchers to trace the circuits involved in memory storage and retrieval. They can also artificially reactivate memories by using optogenetics, a technique that allows them to turn target cells on or off using light.

In the new Science study, the researchers used this approach to label memory cells in mice during a fear-conditioning event — that is, a mild electric shock delivered when the mouse is in a particular chamber. Then, they could use light to artificially reactivate these memory cells at different times and see if that reactivation provoked a behavioral response from the mice (freezing in place). The researchers could also determine which memory cells were active when the mice were placed in the chamber where the fear conditioning occurred, prompting them to naturally recall the memory.

The researchers labeled memory cells in three parts of the brain: the hippocampus, the prefrontal cortex, and the basolateral amygdala, which stores memories’ emotional associations.

Just one day after the fear-conditioning event, the researchers found that memories of the event were being stored in engram cells in both the hippocampus and the prefrontal cortex. However, the engram cells in the prefrontal cortex were “silent” — they could stimulate freezing behavior when artificially activated by light, but they did not fire during natural memory recall.

“Already the prefrontal cortex contained the specific memory information,” Kitamura says. “This is contrary to the standard theory of memory consolidation, which says that you gradually transfer the memories. The memory is already there.”

Over the next two weeks, the silent memory cells in the prefrontal cortex gradually matured, as reflected by changes in their anatomy and physiological activity, until the cells became necessary for the animals to naturally recall the event. By the end of the same period, the hippocampal engram cells became silent and were no longer needed for natural recall. However, traces of the memory remained: Reactivating those cells with light still prompted the animals to freeze.

In the basolateral amygdala, once memories were formed, the engram cells remained unchanged throughout the course of the experiment. Those cells, which are necessary to evoke the emotions linked with particular memories, communicate with engram cells in both the hippocampus and the prefrontal cortex.

Theory revision
The findings suggest that traditional theories of consolidation may not be accurate, because memories are formed rapidly and simultaneously in the prefrontal cortex and the hippocampus on the day of training.

“They’re formed in parallel but then they go different ways from there. The prefrontal cortex becomes stronger and the hippocampus becomes weaker,” Morrissey says.

“This paper shows clearly that from the get-go, engrams are formed in the prefrontal cortex,” says Paul Frankland, a principal investigator in the Neurobiology Laboratory at the Hospital for Sick Children in Toronto, who was not involved in the study. “It challenges the notion that there’s a movement of the memory trace from the hippocampus to the cortex, and makes the point that these circuits are engaged together at the same time. As the memories age, there’s a shift in the balance of which circuit is engaged as a memory is recalled.”

Further studies are needed to determine whether memories fade completely from hippocampal cells or if some traces remain. Right now, the researchers can only monitor engram cells for about two weeks, but they are working on adapting their technology to work for a longer period.

Kitamura says he believes that some trace of memory may stay in the hippocampus indefinitely, storing details that are retrieved only occasionally. “To discriminate two similar episodes, this silent engram may reactivate and people can retrieve the detailed episodic memory, even at very remote time points,” he says.

The researchers also plan to further investigate how the prefrontal cortex engram maturation process occurs. This study already showed that communication between the prefrontal cortex and the hippocampus is critical, because blocking the circuit connecting those two regions prevented the cortical memory cells from maturing properly.

The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.
 

·
Banned
Joined
·
21,040 Posts
Discussion Starter #13
The Arctic Ocean Is Becoming More Like the Atlantic Ocean

scientificamerican.com
By Brian Kahn, Climate Central

The Arctic is undergoing an astonishingly rapid transition as climate change overwhelms the region.

New research sheds light on the latest example of the changes afoot, showing that parts of the Arctic Ocean are becoming more like the Atlantic. Warm waters are streaming into the ocean north of Scandinavia and Russia, altering ocean productivity and chemistry. That’s making sea ice recede and kickstarting a feedback loop that could make summer ice a thing of the past.

“2015 was a really anomalous year when we had problems finding a suitable ice floe to launch our drifting buoys,” Igor Polyakov, an oceanographer at the University of Alaska who led the new study, said. “(There was) nothing like that in the past, and it became a motivation to our analysis: why was ice in 2015 so rotten? What drives this huge change?”

The findings, published in Science on Thursday, show that while warming air has a role to play, processes are playing out in the ocean itself that are fundamentally altering the region.

Those changes will have impacts on the people, plants and animals that call the Arctic home. They could also create more geopolitical tension as resources previously locked under ice become available and shipping lanes open up.

In the eastern Arctic Ocean, the shift is showing up in the ocean’s layering. A cap of cold, less salty water covers the eastern portion of the Arctic Ocean. Underneath it sits a pool of warm, salty Atlantic water that until recently hasn’t been able to find a way to the surface. That stratification has kept the ice relatively safe from the warm water’s grip.

The ocean has become gradually less stratified since the 1970s. Using data from buoys and satellites, Polyakov and his colleagues have found a more marked shift over the past decade and a half. Since 2002, the difference in water temperatures between the layers has dropped by about 2°F.

In the winters of 2013-2015, the cap separating the deep water and surface water disappeared completely in some locations, allowing the warm Atlantic waters to reach the surface and cut further into the sea ice pack. At the same time, warm air has further reduced sea ice, which is allowing still more mixing of the ocean layers.

The result is a feedback loop that is essentially turning roughly a third of the eastern Arctic Ocean into something resembling the ice-free Atlantic Ocean.

“Rapid changes in the eastern Arctic Ocean, which allow more heat from the ocean interior to reach the bottom of sea ice, are making it more sensitive to climate changes,” Polyakov said. “This is a big step toward the Arctic with seasonal sea-ice cover.”

The changes are already apparent in the region, which has largely been ice-free during the summer since 2011. The sea ice winter maximum, which has set a record low for three years running, has been largely driven by a lack of ice in the eastern Arctic.

Polyakov said he’s seen the rapid changes in ice firsthand. When they first put buoys in the eastern Arctic in 2002, researchers had to reach the sites on heavy icebreakers.

“Now we can reach them using an ice class ship,” he said. Ice class ships are not as heavily reinforced as icebreakers.

The sea ice changes are having profound impacts beyond easing researchers’ access to remote sites. Other research, published earlier this week in Science Advances, shows that thinning sea ice is allowing phytoplankton to bloom across the region.

Phytoplankton are tiny plants, and like your average potted plant, they need sunlight to bloom. Sea ice has been thick enough to prevent that from happening until very recently. The new findings show that over the past decade, up to 30 percent of the Arctic has become primed for summer blooms.

“Both of our results show the Arctic becoming a very different place than it has been in the past,” Christopher Horvat, an oceanographer at Harvard who led the plankton study, said. “Water pathways are changing, the ecology is changing, all driven by the declining sea ice field.”

This article is reproduced with permission from Climate Central. The article was first published on April 6, 2017.
 

·
Banned
Joined
·
21,040 Posts
Discussion Starter #14
The Science of Problem-Solving

blogs.scientificamerican.com

Ulrich Boser

For Gurpreet Dhaliwal, just about every decision is a potential opportunity for effective problem solving. What route should he take into the office? Should Dhaliwal write his research paper today or next week? "We all do problem solving all day long," Dhaliwal told me.

An emergency medicine physician, Dhaliwal is one of the leaders in a field known as clinical reasoning, a type of applied problem solving. In recent years, Dhaliwal has mapped out a better way to solve thorny issues, and he believes that his problem solving approach can be applied to just about any field from knitting to chemistry.

For most of us, problem solving is one of those everyday activities that we do without much thought. But it turns out that many common approaches like brainstorming don’t have much research behind them. In contrast, practices that might seem a little odd—like talking to yourself—can be pretty effective.

I came across the new research on problem solving as part of my reporting for a book on the science of learning. It was mathematician George Polya who first established the field, detailing a four-step approach to cracking enduring riddles.

For Polya, the first phase of problem solving is “understanding.” In this phase, people should look to find the core idea behind a problem. “You have to understand the problem,” Polya argued. “What is the unknown? What are the data?”

The second phase is “devising a plan,” in which people map out how they’d address the problem. “Find the connection between the data and the unknown,” Polya counseled.

The third phase of problem solving is “carrying out the plan.” This is a matter of doing—and vetting: “Can you prove that it is correct?”

The final phase for Polya is “looking back.” Or learning from the solution: People should "consolidate their knowledge.”

While Dhaliwal broadly follows this four-step method, he stresses that procedures are not enough. While a focused method is helpful, thorny issues don’t always fit nicely into categories.

This idea is clear in medicine. After all, symptoms rarely match up perfectly with an illness. Dizziness can be the signal of something serious—or a symptom of a lack of sleep. “What is tricky is to figure out what’s signal and what’s noise,” Dhaliwal told me.

In this regard, Dhaliwal argues that what’s at the heart of effective problem solving is making a robust connection between the problem and the solution. “Problem solving is part craft and part science,” Dhaliwal says, a type of “matching exercise.”

To get a sense of Dhaliwal’s approach, I once watched him solve a perplexing case. It was at a medical conference, and Dhaliwal stood at a dais as a fellow doctor explained the case: Basically, a man came into the ER one day—let’s call him Andreas—and he spat up blood, could not breathe very well, and had a slight fever.

At the start of the process, Dhaliwal recommends developing a one-sentence description of the problem. "It’s like a good Google search,” he said. “You want a concise summary,” and in this case, it was: Sixty-eight-year-old man with hemoptysis, or coughing up blood.

Dhaliwal also makes a few early generalizations, and he thought that Andreas might have a lung infection or an autoimmune problem. There wasn’t enough data to offer any sort of reliable conclusion, though, and really Dhaliwal was just gathering information.

Then came an X-ray and an HIV test, and as each bit of evidence rolled in, Dhaliwal detailed various scenarios, assembling the data in different ways. “To diagnose, sometimes we are trying to lump, and sometimes trying to split,” he said.

Dhaliwal’s eyes flashed, for instance, when it became apparent that Andreas had worked in a fertilizer factory. It meant that Andreas was exposed to noxious chemicals, and for a while, it seemed like a toxic substance was at the root of Andreas’s illness.

Dhaliwal had a few strong pieces of evidence that supported the theory, including some odd-looking red blood cells. But Dhaliwal wasn’t comfortable with the level of proof. “I’m like an attorney presenting in a court of law,” Dhaliwal told me. “I want evidence.”

As the case progressed, Dhaliwal came across a new detail: there was a growth in the heart. This shifted the diagnosis, knocking out the toxic chemical angle, because that kind of exposure doesn’t spark tumors.

Eventually, Dhaliwal uncovered a robust pattern, diagnosing Andreas with a cardiac angiosarcoma, or heart cancer. The pattern best explained the problem. “Diagnosing often comes down to the ability to pull things together,” he said.

Dhaliwal doesn’t always get the right answer. But at the same time, it was clear that a more focused approach to problem solving can make a real difference. If we’re more aware of how we approach an issue, we are better able to resolve it.

This idea explains why people who talk to themselves are more effective at problem solving. Self-queries—like is there enough evidence?—help us think through an issue.

As for Dhaliwal, he had yet another problem to solve after his diagnosis of Andreas: Should he take an Uber to the airport? Or should he grab a cab? After a little thought, Dhaliwal decided on an Uber. It was likely to be cheaper and equally comfortable. In other words, it was the solution that best matched the problem.
 

·
Registered
Joined
·
5,906 Posts
Most interesting science story to me is that of the Demon Core. And criticality accidents in general. Wikipedia has a page on them. Crazy shit.
 

Discussion Starter #17
Hydrothermal Vents on Enceladus Hint at Life Beyond Earth

blogs.discovermagazine.com

Enceladus (Credit: NASA/JPL-Caltech/Space Science Institute)

In 1977, a group of marine researchers discovered something they’d only theorized before: cracks in the ocean floor releasing heat, warming up (and often boiling) the ocean around them. They also found mollusks there, and subsequent vents have yielded heat-resistant microbes, giant tube worms, and more fantastic creatures living in what are essentially small, underwater volcanoes.

Now, NASA has announced that it has indirect evidence for hydrothermal vents beyond Earth. In its encounters with Saturn’s moon Enceladus, the Cassini craft found chemicals associated with these vents. The results were published today in Science. It adds to the body of evidence that Enceladus could be ripe for life.

“Enceladus is too small to have retained the hydrogen from when it formed, so the hydrogen we see today is coming from inside Enceladus,” Linda Spilker, project scientist on the Cassini mission, said in a press conference.

Enceladus, a tiny moon, took Cassini researchers by surprise when they discovered what seemed to be geysers of water shooting from its south pole in 2005. Subsequent investigations built a picture of the origin: liquid water under the surface of Enceladus, which led to the idea of an entire subsurface ocean. The heating mechanism, to date, has not been identified.
A highly enhanced image of Saturn’s moon Enceladus, taken in 2005 and backlit by the sun, shows the fountain-like sources of the fine spray of material that towers over the south polar region. (Credit: NASA/JPL/Space Science Institute)

Cassini’s Ion Neutral Mass Spectrometer detected molecular hydrogen in the ejecta from these geysers. According to principal investigator Hunter Waite of the Southwest Research Institute and his co-investigators, the source almost certainly has to be hydrothermal vents at Enceladus’ sea floor. This means there’s plenty of geological activity, increasing the chances for life.

Indeed, researchers published a paper last year suggesting that hydrothermal vents were the source of life on Earth, where chemical reactions fed the earliest microbes. If that’s the case on Enceladus, the ocean may host microbial life at the very least.

“The hydrogen could be a potential source of chemical energy for any microbes living in Enceladus’ ocean,” Spilker says.

Of course, it may be years or even decades until we know for sure — in September, NASA will intentionally crash Cassini into Saturn to make sure it doesn’t crash-land on Titan or Enceladus and accidentally contaminate either potentially habitable moon with Earth bacteria.
This post originally appeared on Astronomy.com.
 

Discussion Starter #18
Magnetic Maps Behind one of Nature’s Craziest Migrations

blogs.discovermagazine.com

Young eels, or elvers, migrate from their ocean hatcheries to brackish waters where they mature. (Credit: Maryland Fisheries Resources Office, USFWS)

In the middle of the Atlantic Ocean, there’s an enormous patch of seaweed that’s perplexed sailors for centuries: the Sargasso Sea. This strange place is where American and European eels go to breed. Once born, the little eels — called elvers — have to venture toward land.

American eels live out their lives — which can be more than a decade — just off the eastern seaboard. Their cousins across the pond live everywhere from Scandinavia to North Africa. Then, at the ends of their lives, both species journey thousands of miles out to sea to lay their eggs.

It’s a truly remarkable journey, and in recent years scientists have tracked it for the first time. For a century, the eels’ path was a mystery: they left the East Coast and just magically appeared in the Sargasso Sea. To crack the case, researchers had to figure out how to attach pop-up satellite tags to the eels that wouldn’t kill them during the sometimes 1,500-mile swim. The researchers figured out that the eels use ocean currents to hitch a ride to their chosen coast.
After their birth in the Sargasso Sea, American and European eels migrate toward land. (Credit: USFWS)

And in a paper published Thursday in Current Biology, scientists have announced another discovery about how they do it. By studying European eels, the researchers figured out that these eels actually have a magnetic map. Rather than guiding the fierce little swimmers toward land, the “map sense” steers them toward the Gulf Stream, which offers an easier ride toward Europe.

“We were not surprised to find that eels have a magnetic map, but we were surprised to discover how well they can detect subtle differences in magnetic fields,” said University of North Carolina, Chapel Hill scientist Lewis Naisbett-Jones. “We were even more surprised when our ocean simulation models revealed that the little eels use their map not so much to locate Europe, but to target a big conveyor belt — the Gulf Stream — that will take them there. Presumably, a little bit of work (i.e., swimming) helps increase their chances of catching a mostly free ride to their destination.”

The team figured this out by dropping elvers into an experimental apparatus that produced magnetic fields mimicking those experienced along the animals’ migratory path. The scientists simply dropped the baby eels into their contraption and watched which way they swam. Then they used computer models of ocean currents to simulate how their results would play out in the real world. Elvers that swam even vaguely in the right direction would have a much higher chance of reaching the Gulf Stream, the scientists found. This magnetic sixth sense puts them in good company alongside sea turtles and salmon.
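The logic of that second step can be sketched as a toy particle simulation. Everything below is invented for illustration — the flow field, the latitude band standing in for the Gulf Stream, and all parameter values are hypothetical, not the researchers’ actual ocean model — but it shows how even a weak swimming bias in roughly the right direction raises the fraction of drifters that catch the current:

```python
# Toy model in the spirit of the study's ocean-current simulations.
# All numbers are made up for illustration.
import random

STREAM_Y = (40.0, 50.0)  # hypothetical latitude band of the "conveyor belt"

def simulate(n=2000, steps=300, swim_bias=0.0, seed=1):
    """Fraction of elvers that reach the stream band. Each elver starts at
    a random latitude and is buffeted by random drift; swim_bias adds a
    small directed step toward the band's center, standing in for weak
    magnetically guided swimming."""
    rng = random.Random(seed)
    target = sum(STREAM_Y) / 2
    caught = 0
    for _ in range(n):
        y = rng.uniform(0.0, 100.0)        # random starting latitude
        for _ in range(steps):
            y += rng.gauss(0.0, 0.5)       # turbulent, undirected drift
            y += swim_bias * (1 if y < target else -1)  # vague orientation
            if STREAM_Y[0] <= y <= STREAM_Y[1]:
                caught += 1
                break
    return caught / n

drifters = simulate(swim_bias=0.0)  # purely passive particles
swimmers = simulate(swim_bias=0.1)  # "even vaguely in the right direction"
print(f"passive: {drifters:.0%}, oriented: {swimmers:.0%}")
```

Running it shows the oriented swimmers reaching the band far more often than passive drifters, which is the qualitative result the paragraph describes: a little swimming buys a mostly free ride.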

However, the elvers’ journey toward land is actually just the start. There’s a major obstacle once they get there. Elver fishing is booming. American anglers can get thousands of dollars per pound for the well-traveled and tiny eels. Once sold, the elvers are raised to adulthood and sold for sushi — presumably to customers totally unaware of the strange lives these eels have had.
 

Discussion Starter #19
Dolphin's-Eye Video Is Breathtaking, Barfy

blogs.discovermagazine.com

It’s surprisingly hard to stick a camera to a dolphin. Surprising, anyway, when you consider the other animals that have carried monitoring devices down into the ocean for human scientists: sharks, sea turtles, birds, manatees, even whales. When a group of researchers recently overcame the challenges and created a camera that dolphins can wear, they were inducted into a dizzying underwater world.

Scientists may attach instruments to marine animals to do environmental research, as with seals that gather climate data. Or they may be interested in the animals themselves—for example, the ideal fatness for elephant seals.

The information camera-carrying animals send back can help researchers better understand how these species behave and how to protect them.

But attaching a camera to an animal is easiest when you can capture the creature and hold it still. Alternatively, if you’re studying a great big animal with a flattish back, you might be able to slap on your instrument as it swims past your boat. Small, fast-swimming dolphins are tougher. Before now, no one had ever gotten a video camera onto a small cetacean, writes University of Alaska Southeast biologist Heidi Pearson in Marine Biology.

Pearson and her coauthors tackled the problem with dusky dolphins. This species grows to a little less than six feet long. A population of dusky dolphins off the coast of New Zealand made good subjects because these animals are already used to visits from scientists and tourists. They’re also abundant—they swim in groups of up to 1,000 during the day.

The researchers developed a tool they call C-VISS, for “cetacean-borne video camera and integrated sensor system.” It includes a miniature video camera, a tool to monitor the animal’s depth, four suction cups, a flotation device so it pops back up to the surface when it comes loose, and transmitters so researchers can track it down again afterward.

Then they started trying to attach it. The researchers brought their boat alongside a group of swimming dolphins and used a specially built pole to hold out the C-VISS. The device was stuck to the end of the pole with Velcro; if a person got the device successfully suctioned onto a swimming animal’s back, the Velcro would tear away.

The method wasn’t surefire. The researchers did five trials, and even in their most successful trial the camera stuck to a dolphin only 8 percent of the time. But that was enough to generate almost 9 hours of video footage.

You can watch one brief video here. It’s a pretty exciting ride. The nauseating up-and-down movement of the camera might be annoying—until you recall that you’re riding along with an animal doing, well, a dolphin kick. You can see some of the other dolphins in the group, and if you watch closely you’ll spot a baby dolphin swimming alongside its mother.

This type of camera could give all kinds of useful insights into dolphin life, the researchers say. In their footage, they saw both friendly flipper rubbing and more-than-friendly sexual behavior. (Allegedly, that’s in panel d of the figure above.) The camera also caught some of the animals that dolphins were preying on, as well as the plants in their habitats.

Crucially, the researchers also saw that the cameras didn’t change dolphins’ behavior. The animals tended to swim away right after getting slapped with a camera, but they didn’t act panicked. Afterward, they behaved the same as the rest of their group. This is important because scientists don’t want to endanger animals with their equipment—or get results that don’t match what a dolphin does in its everyday life.

Next, the scientists hope to improve their device by shrinking it further and adding more tools. And, presumably, by making something they can attach on the first or second try.
 