
Biologists Unveil the First Living Yeast Cells With Over 50% Synthetic DNA


Our ability to manipulate the genes of living organisms has expanded dramatically in recent years. Now, researchers are a step closer to building genomes from scratch after unveiling a strain of yeast with more than 50 percent synthetic DNA.

Since 2006, an international consortium of researchers called the Synthetic Yeast Genome Project has been attempting to rewrite the entire genome of brewer’s yeast. The organism is an attractive target because it’s a eukaryote like us, and it’s also widely used in the biotechnology industry to produce biofuels, pharmaceuticals, and other high-value chemicals.

While researchers have previously rewritten the genomes of viruses and bacteria, yeast is more challenging because its DNA is split across 16 chromosomes. To speed up progress, the research groups involved each focused on rewriting a different chromosome, before trying to combine them.

The team has now successfully synthesized new versions of all 16 chromosomes and created an entirely novel chromosome. In a series of papers in Cell and Cell Genomics, the team also reports the successful combination of seven of these synthetic chromosomes, plus a fragment of another, in a single cell. Altogether, they account for more than 50 percent of the cell’s DNA.

“Our motivation is to understand the first principles of genome fundamentals by building synthetic genomes,” co-author Patrick Yizhi Cai from the University of Manchester said in a press release. “The team has now re-written the operating system of the budding yeast, which opens up a new era of engineering biology—moving from tinkering a handful of genes to de novo design and construction of entire genomes.”

The synthetic chromosomes are notably different from those of normal yeast. The researchers removed considerable amounts of repetitive “junk DNA” that doesn’t code for specific proteins. In particular, they cut stretches of DNA known as transposons—which can naturally recombine in unpredictable ways—to improve the stability of the genome.

They also separated all the genes coding for transfer RNA onto a completely new 17th chromosome. These molecules carry amino acids to ribosomes, the cell’s protein factories. Cai told Science that tRNA molecules are “DNA damage hotspots.” The group hopes that separating them out and housing them in a so-called “tRNA neochromosome” will make it easier to keep them under control.

“The tRNA neochromosome is the world’s first completely de novo synthetic chromosome,” says Cai. “Nothing like this exists in nature.”

Another significant alteration could accelerate efforts to find useful new strains of yeast. The team incorporated a system called SCRaMbLE into the genome, making it possible to rapidly rearrange genes within chromosomes. This “inducible evolution system” allows cells to quickly cycle through potentially interesting new genomes.

“It’s kind of like shuffling a deck of cards,” coauthor Jef Boeke from New York University Langone Health told New Scientist. “The scramble system is essentially evolution on hyperspeed, but we can switch it on and off.”
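Boeke’s card-shuffling analogy can be made concrete with a toy simulation. The sketch below is purely illustrative: it assumes three kinds of rearrangement events (deletion, inversion, duplication) applied to a list of labeled gene segments, and does not model the actual recombinase biochemistry the real SCRaMbLE system relies on.

```python
import random

def scramble(segments, events, seed=None):
    """Toy model of an inducible rearrangement system: each event
    deletes, inverts, or duplicates a random run of gene segments.
    (Illustrative only -- not the real SCRaMbLE mechanism.)"""
    rng = random.Random(seed)
    genome = list(segments)
    for _ in range(events):
        if len(genome) < 2:
            break
        i = rng.randrange(len(genome))
        j = rng.randrange(i + 1, len(genome) + 1)
        op = rng.choice(["delete", "invert", "duplicate"])
        if op == "delete":
            genome = genome[:i] + genome[j:]
        elif op == "invert":
            genome = genome[:i] + genome[i:j][::-1] + genome[j:]
        else:  # duplicate the chosen run in place
            genome = genome[:j] + genome[i:j] + genome[j:]
    return genome

# With the "switch" off (zero events), the genome is untouched;
# with it on, each call explores a different rearranged genome.
print(scramble(list("ABCDEFGH"), events=3, seed=1))
```

The on/off switch in the quote corresponds here to whether any events are applied at all, which is why the system can be used to generate diversity on demand and then frozen.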

To get several of the modified chromosomes into the same yeast cell, Boeke’s team ran a lengthy cross-breeding program, mating cells with different combinations of genomes. At each step there was an extensive “debugging” process, as synthetic chromosomes interacted in unpredictable ways.

Using this approach, the team incorporated six full chromosomes and part of another one into a cell that survived and grew. They then developed a method called chromosome substitution to transfer the largest yeast chromosome from a donor cell, bumping the total to seven and a half and increasing the total amount of synthetic DNA to over 50 percent.

Getting all 17 synthetic chromosomes into a single cell will require considerable extra work, but crossing the halfway point is a significant achievement. And if the team can create yeast with a fully synthetic genome, it will mark a step change in our ability to manipulate the code of life.

“I like to call this the end of the beginning, not the beginning of the end, because that’s when we’re really going to be able to start shuffling that deck and producing yeast that can do things that we’ve never seen before,” Boeke says in the press release.

Image Credit: Scanning electron micrograph of partially synthetic yeast / Cell/Zhao et al.

Spinal Implant Helps a Man With Severe Parkinson’s Walk With Ease Again


In his mid-30s, Marc Gauthier noticed a creeping stiffness in his muscles. His hand shook when trying to hold steady. He struggled to maintain his balance while walking.

Then he began to freeze in place. While running errands down narrow streets, his muscles seemed to suddenly disconnect from his brain. He couldn’t take the next step.

Gauthier has Parkinson’s disease. The debilitating brain disorder gradually destroys a type of brain cell related to the planning of movement. Since the 1980s, scientists have explored multiple treatments: transplanting stem cells to replace dying brain cells, using medications to counteract symptoms, and deep brain stimulation—the use of an electrical brain implant to directly zap the brain regions that coordinate movement.

While beneficial to many people, such treatments didn’t completely help Gauthier. Even with a deep brain stimulation device and medications, he struggled to walk without freezing in place.

In 2021, he signed up for a highly experimental trial. He had a small implant inserted into his spinal cord to directly activate nerves connecting his spinal cord and leg muscles. While extensively tested in non-human primates with symptoms resembling Parkinson’s, the therapy had never been tried in humans before.

Once Gauthier adapted to the implant, he found he could stroll the banks of Lake Geneva in Switzerland without any aid after three decades of living with the disease.

“I can now walk with much more confidence,” he said in a press conference. Two years after the implant, “I’m not even afraid of stairs anymore.”

Gauthier’s success suggests a new way of tackling movement disorders originating in the brain. The implant mimics the natural signal patterns the brain sends to muscles to control walking, overriding his faulty biological signals.

“There are no therapies to address the severe gait problems that occur at a later stage of Parkinson’s, so it’s impressive to see him walking,” said study author Dr. Jocelyne Bloch to Nature.

The work is an “impressive tour-de-force,” said Drs. Aviv Mizrahi-Kliger and Karunesh Ganguly at the University of California, San Francisco, who were not involved in the study.

An Old Conundrum

It’s easy to take our movement for granted. A skip across a puddle seems mundane. But for people with Parkinson’s disease, it’s a hefty challenge.

We don’t yet fully understand what triggers the disease. A number of genes have been implicated. What’s clear is that the disorder slowly robs a person of the ability to move their muscles as the associated neurons are damaged. These cells pump out dopamine—a brain chemical that’s often linked to unexpected “happy” signals, such as after a surprisingly good meal. However, dopamine has a second job: It’s a traffic controller for muscle movement.

These signals break down in Parkinson’s disease.

One way to treat Parkinson’s is to increase dopamine levels with a drug called levodopa. Deep brain stimulation is another. Here, researchers insert an electrical probe deep inside the brain (hence the name) to activate neural circuits that release dopamine. While effective, the procedure is damaging to surrounding brain tissue, often causing inflammation and scarring.

The new study avoided the brain entirely.

Finding Hot Spots

For over a decade, Dr. Grégoire Courtine at the Swiss Federal Institute of Technology in Lausanne, in collaboration with NeuroRestore, has tried to reconnect mind to muscle.

Courtine’s team previously engineered an implant that helped people with spinal cord injuries walk with minimal training. Our brain controls muscles through the spinal cord. If that highway is damaged, muscles no longer respond to neural commands. Building on their earlier work, the team sought to develop a similar implant for Parkinson’s.

But many mysteries remained: Where should the electrodes go? What level of electrical stimulation is necessary to activate the neural circuits? And would the muscles respond to those artificial signals?

Using data from monkeys and humans, both healthy and with Parkinson’s—or Parkinson’s-like symptoms in the case of the monkeys—the team trained a spinal implant to detect unusual gaits and movements common in the disease. This implant could then also stimulate the spinal cord to restore healthy walking patterns.

The implant is tiny yet powerful. Made of two eight-electrode arrays, it uses various electrical stimulation patterns to mimic the intricacies of a natural neural command. The implant is placed in the spinal cord around the small of the back, lower than prior devices. This lower placement better engages the back and leg muscles essential for maintaining balance and walking.

In monkeys chemically-induced to display Parkinson’s symptoms, the stimulator restored aspects of their gait and balance. The monkeys could move at speeds similar to healthy peers and had better posture than untreated ones. Importantly, the implant prevented them from falling when challenged with obstacles—a problem for advanced Parkinson’s patients.

One Step Forward

These encouraging results led Gauthier to volunteer as the first human participant in an ongoing trial to test spinal cord stimulation in people with Parkinson’s.

Gauthier was first diagnosed with the disorder at 36. Eight years later, he had deep brain stimulation electrodes implanted and was taking the medication levodopa. Even so, his motions were slow and rigid, and he often stumbled or fell.

To personalize his implant, the team captured hours of video of his walking patterns. They then built a model of his muscles, skeleton, and several joints, such as the hips and knees. Using the model, they trained software to compensate for any dysfunction—allowing it to decipher the user’s intent and translate it into electrical zaps in the spinal cord to support the movement.

With the spinal cord implant active, Gauthier’s gait was far closer to that of a healthy person than to someone with Parkinson’s. Where his previous treatments could only partly control his symptoms, he could now finally walk with ease along a beachside road.

These results, though promising, are from just one person. The team wants to expand the treatment to six more people with Parkinson’s next year. Also, for now, the spinal stimulator isn’t a replacement for deep brain stimulation. During the trial Gauthier had his deep brain implant on. However, adding spinal cord stimulation lowered his chances of freezing and falling.

Parkinson’s remains an enigma. The disease varies between people and changes over time. Some develop gait and motor symptoms early on; others show symptoms far later. “That’s why researchers need to test the implant in as many people as possible,” study author Eduardo Martín Moraud told El Pais.

Larger trials are necessary before bringing the treatment to the hundreds of thousands of people with Parkinson’s in the US. But for now, one man has regained his life. “My daily life has profoundly improved,” said Gauthier.

Image Credit: CHUV Weber Gilles

How the World’s Biggest Optical Telescope Could Crack Some of the Greatest Puzzles in Science


Astronomers get to ask some of the most fundamental questions there are, ranging from whether we’re alone in the cosmos to what the nature of the mysterious dark energy and dark matter making up most of the universe is.

Now, a large group of astronomers from all over the world is building the biggest optical telescope ever—the Extremely Large Telescope (ELT)—in Chile. Once construction is completed in 2028, it could provide answers that transform our knowledge of the universe.

With its 39-meter diameter primary mirror, the ELT will contain the largest, most perfect reflecting surface ever made. Its light-collecting power will exceed that of all other large telescopes combined, enabling it to detect objects millions of times fainter than the human eye can see.

There are several reasons why we need such a telescope. Its incredible sensitivity will let it image some of the first galaxies ever formed, with light that has traveled for 13 billion years to reach the telescope. Observations of such distant objects may allow us to refine our understanding of cosmology and the nature of dark matter and dark energy.

Alien Life

The ELT may also offer an answer to the most fundamental question of all: Are we alone in the universe? The ELT is expected to be the first telescope to track down Earth-like exoplanets—planets that orbit other stars but have a similar mass, orbit, and proximity to their host as Earth.

Occupying the so-called Goldilocks zone, these Earth-like planets will orbit their star at just the right distance for water to neither boil nor freeze—providing the conditions for life to exist.

Size comparison between the ELT and other telescope domes. Image Credit: ESO/ Wikipedia, CC BY-SA

The ELT’s camera will have six times better resolution than that of the James Webb Space Telescope, allowing it to take the clearest images yet of exoplanets. But fascinating as these pictures will be, they will not tell the whole story.

To learn if life is likely to exist on an exoplanet, astronomers must complement imaging with spectroscopy. While images reveal shape, size, and structure, spectra tell us about the speed, temperature, and even the chemistry of astronomical objects.

The ELT will contain not one, but four spectrographs—instruments that disperse light into its constituent colors, much like the iconic prism on Pink Floyd’s The Dark Side of the Moon album cover.

Each about the size of a minibus, and carefully environmentally controlled for stability, these spectrographs underpin all of the ELT’s key science cases. For giant exoplanets, the Harmoni instrument will analyze light that has traveled through their atmospheres, looking for signs of water, oxygen, methane, carbon dioxide, and other gases that indicate the existence of life.

To detect much smaller Earth-like exoplanets, the more specialized Andes instrument will be needed. With a cost of around €35 million, Andes will be able to detect tiny changes in the wavelength of light.

From previous satellite missions, astronomers already have a good idea of where to look in the sky for exoplanets. Indeed, there have been several thousand confirmed or “candidate” exoplanets detected using the “transit method.” Here, a space telescope stares at a patch of sky containing thousands of stars and looks for tiny, periodic dips in their intensities, caused when an orbiting planet passes in front of its star.
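The transit method can be illustrated with a toy search: inject a periodic dip into a synthetic light curve, then recover the period by phase folding, checking each trial period for a deep, well-aligned phase bin. All numbers here (period, depth, noise level) are invented for illustration; real pipelines use more rigorous statistics such as box least squares.

```python
import numpy as np

# Synthetic light curve: flat star plus noise, with a 0.1% dip
# lasting 0.2 days that repeats every 13.7 days (made-up values).
rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.01)            # observation times, days
flux = 1.0 + rng.normal(0, 1e-4, t.size)
period, duration, depth = 13.7, 0.2, 1e-3
flux[(t % period) < duration] -= depth

def folded_depth(t, flux, trial_period, nbins=200):
    """Fold the light curve at a trial period and return how far the
    deepest phase bin sits below the typical bin."""
    phase = (t % trial_period) / trial_period
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    sums = np.bincount(bins, weights=flux, minlength=nbins)
    counts = np.bincount(bins, minlength=nbins)
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 1.0)
    return np.median(means) - means.min()

# Only the true period lines all the dips up in one phase bin.
trials = np.arange(5.0, 20.0, 0.1)
depths = np.array([folded_depth(t, flux, p) for p in trials])
best = trials[depths.argmax()]
print(f"recovered period ≈ {best:.1f} days")
```

At the wrong trial period the dips smear across all phases and average away; at the right one they stack, which is what makes the periodicity detectable even when each individual dip is buried in noise.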

But Andes will use a different method to hunt for other Earths. As an exoplanet orbits its host star, its gravity tugs on the star, making it wobble. This movement is incredibly small; Earth’s orbit causes the sun to oscillate at just 10 centimeters per second—the walking speed of a tortoise.

Just as the pitch of an ambulance siren rises and falls as it travels towards and away from us, the wavelength of light observed from a wobbling star increases and decreases as the planet traces out its orbit.

Remarkably, Andes will be able to detect this minuscule change in the light’s color. Starlight, while essentially continuous (“white”) from the ultraviolet to the infrared, contains bands where atoms in the outer region of the star absorb specific wavelengths as the light escapes, appearing dark in the spectra.

Tiny shifts in the positions of these features—around 1/10,000th of a pixel on the Andes sensor—may, over months and years, reveal the periodic wobbles. This could ultimately help us to find an Earth 2.0.
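The scale of this measurement is easy to check with the non-relativistic Doppler formula Δλ/λ = v/c. The per-pixel dispersion below is an assumed, ballpark figure for a high-resolution spectrograph, not a published Andes specification:

```python
# Back-of-envelope: wavelength shift from the sun's ~10 cm/s reflex
# motion due to Earth, for an absorption line near 550 nm.
C = 299_792_458.0        # speed of light, m/s
v = 0.10                 # stellar reflex velocity, m/s
lam = 550e-9             # line wavelength, m

dlam = lam * v / C       # non-relativistic Doppler shift
print(f"shift = {dlam * 1e9:.2e} nm")   # ~1.8e-7 nm

# Assuming a dispersion of ~0.002 nm per detector pixel (a made-up
# but plausible figure for a high-resolution spectrograph):
pixels = dlam * 1e9 / 0.002
print(f"≈ {pixels:.1e} pixels")         # on the order of 1e-4 pixels
```

That lands right at the roughly 1/10,000th-of-a-pixel figure quoted above, which is why the instrument's wavelength scale must be calibrated so precisely.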

At Heriot-Watt University, my team is piloting the development of a laser system known as a frequency comb that will enable Andes to reach such exquisite precision. Like the millimeter ticks on a ruler, the laser will calibrate the Andes spectrograph by providing a spectrum of light structured as thousands of regularly spaced wavelengths.

A spectrograph image from the Southern African Large Telescope. The regularly spaced tick marks are from a laser frequency comb, underneath which are gas emission lines. Image Credit: Rudi Kuhn (SALT) / Derryck Reid (Heriot-Watt University)

This scale will remain constant over decades, mitigating the measurement errors that occur from environmental changes in temperature and pressure.
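The calibration idea can be sketched numerically: comb lines sit at exactly known frequencies f_n = f_ceo + n * f_rep, so their measured pixel positions pin down the spectrograph's wavelength scale. All instrument numbers below (line spacing, dispersion, pixel noise) are invented for illustration and are not Andes parameters:

```python
import numpy as np

# Comb lines: evenly spaced in frequency, both parameters clock-locked.
f_rep = 25e9                          # assumed 25 GHz line spacing, Hz
f_ceo = 7.5e9                         # assumed offset frequency, Hz
n = np.arange(21_770, 21_790)         # mode numbers near 544 THz
freqs = f_ceo + n * f_rep             # exact comb frequencies, Hz
wavelengths = 299_792_458.0 / freqs   # corresponding wavelengths, m

# Suppose the detector reports these known lines at slightly noisy
# pixel positions with a roughly linear dispersion (all assumed):
rng = np.random.default_rng(3)
pixels = 1000 + (wavelengths - wavelengths.mean()) / 2e-12 \
         + rng.normal(0, 0.05, n.size)

# Fit wavelength as a function of pixel; any stellar feature's pixel
# position can now be converted to an absolute wavelength.
coeffs = np.polyfit(pixels, wavelengths, 1)
star_wavelength = np.polyval(coeffs, 1003.2)
print(f"{star_wavelength * 1e9:.6f} nm")
```

Because the comb's line positions are tied to an atomic clock rather than to the instrument, repeating this fit years later recovers the same ruler, which is what makes decade-long wobble searches feasible.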

With the ELT’s construction cost coming in at €1.45 billion, some will question the value of the project. But astronomy has a significance that spans millennia and transcends cultures and national borders. It is only by looking far outside our solar system that we can gain a perspective beyond the here and now.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESO/L. Calçada / Wikipedia

Zombie Cells Have a Weakness. An Experimental Anti-Aging Therapy Exploits It.


Senescent cells are biochemical waste factories.

A new study suggests a medicine already approved for eye problems may be a way to wipe them out.

Dubbed “zombie cells,” senescent cells slowly accumulate with age or with cancer treatments. The cells lose their ability to perform normal functions. Instead, they leak a toxic chemical soup into their local environment, increasing inflammation and damaging healthy cells.

Over a decade of research has shown eliminating these cells with genetic engineering or drugs can slow down aging symptoms in mice. It’s no wonder investors have poured billions of dollars into these “senolytic” drugs.

There are already hints of early successes. In one early clinical trial, cleaning out zombie cells with a combination of drugs in humans with age-related lung problems was found to be safe. Another study helped middle-aged and older people maintain blood pressure while running up stairs. But battling senescent cells isn’t just about improving athletic abilities. Many more clinical trials are in the works, including strengthening bone integrity and combating Alzheimer’s.

But to Carlos Anerillas, Myriam Gorospe, and their team at the National Institutes of Health (NIH) in Baltimore, therapies have yet to hit zombie cells where it really hurts.

In a study in Nature Aging, the team pinpointed a weakness in these cells: They constantly release toxic chemicals, like a leaky nose during a cold. Called SASP, for senescence-associated secretory phenotype, this stew of inflammatory molecules contributes to aging.

Lucky for us, this constant release of chemicals comes at a price. Zombie cells use a “factory” inside the cell to package and ship their toxic payload to neighboring cells and nearby tissues. All cells have these factories. But the ones in zombie cells go into overdrive.

The new study nailed down a protein pair that’s essential to the zombie cells’ toxic spew and found an FDA-approved drug that inhibits the process. When the drug was given to 22-month-old mice—roughly the human equivalent of 70 years old—the animals had better kidney, liver, and lung function within just two months of treatment.

The work “stands out,” said Yahyah Aman, an editor at Nature Aging. It’s an “exciting target for new senolytic drug development,” added Ming Xu at UConn Health, who wasn’t involved in the study.

A Molecular Metropolis

Each cell is a bustling city with multiple neighborhoods.

Some house our genetic archives. Others translate those DNA codes into proteins. There are also acid-filled dumpsters and molecular recycling bins to keep each cell clear of waste.

Then there’s the ER. No, not the emergency room, but a fluffy croissant-like structure. Called the endoplasmic reticulum, it’s Grand Central for new proteins. The ER packages proteins and delivers them to internal structures, the cell’s surface, or destinations outside the cell.

These “secretory” packages are powerful regulators that control local cellular functions. Normally, the ER helps cells coordinate their responses with neighboring tissues—say, allowing blood to clot after a scrape or stimulating immune responses to heal the damage.

Senescent cells hijack this process. Instead of productive signaling, they release a toxic soup of chemicals. These cells aren’t born harmful. Rather, they’re transformed by a lifetime of injury—damage to their DNA, for example. Faced with so much damage, normal cells would wither away, allowing healthy new cells to replace them in some tissues, like the skin.

Zombie cells, in contrast, refuse to die. As long as the harm stays below a lethal level, the cells live on, expelling their deadly brew and harming others in the vicinity.

These traits make zombie cells a valuable target for anti-aging therapies. And there have been promising treatments. Most have relied on existing knowledge or ideas about how zombie cells work. Researchers then seek out chemicals in massive drug libraries that might disrupt their function. While useful, this strategy can miss treatment options.

The new study went rogue. Rather than starting with a hypothesis, the team screened the whole human genome to find new vulnerabilities.

A Wild West

In their hunt, the team turned to CRISPR. Famously known as a gene editor, CRISPR is now often used to pinpoint genes and proteins that contribute to cellular functions. Here, the team disrupted every gene in the human genome to pinpoint those that eliminated zombie cells.

Their work paid off. The screen found a protein pair critical for senescent cell survival. The team next looked for an FDA-approved drug to disrupt the pair. They found what they were looking for in verteporfin, a drug approved to treat eye blood vessel disease.

In several zombie cell cultures carrying the protein pair, the drug drove senescent cells into apoptosis—that is, the “gentle falling of the leaves,” a sort of cell death that does no harm to surrounding cells.

Digging deeper, the drug seemed to directly target the zombie cells’ endoplasmic reticulum—their shipping center. Cells treated with the drug couldn’t sustain the delicate multi-layered structure, and it subsequently shriveled into a shape like a wet, crumpled paper towel.

“A shrunken ER triggered a metabolic crisis” in zombie cells, explained Anerillas and Gorospe. It “culminated with their death.”

Ageless Mice

As a proof of concept, the team injected elderly mice—roughly the age of a 70-year-old human—with verteporfin once a month for two months.

In just a week, mice treated with verteporfin showed fewer molecular signs of senescence in their kidney, liver, and lungs. Their fur was more luxurious compared to control mice without the drug.

As we age, immune cells often enter the lungs and cause damage. Verteporfin nixed this infiltration and reduced lung scarring in mice—which is often linked to decreased breathing capacity. Similarly, according to blood tests, the drug also helped restore function in the mice’s kidneys and liver.

Decreased numbers of senescent cells dampened inflammatory signals, which could explain the rejuvenating effects, explained the team. Verteporfin also stopped a “guardian” protein that protects senescent cells from death, further triggering their demise.

Tapping into a zombie cell’s unique vulnerabilities is a new strategy in the development of senolytics. There’s far more to explore. The endoplasmic reticulum isn’t the only cellular component in the biological waste factory. Blocking other components that generate senescent cell poisons could also help remove the cells themselves.

It’s a promising alternative to existing methods for wiping out senescent cells. The strategy could “greatly expand the catalog of senolytic therapies,” the team wrote.

Image Credit: A HeLa cell undergoing apoptosis. Tom Deerinck / NIH / Flickr

A Revolution in Computer Graphics Is Bringing 3D Reality Capture to the Masses


As a weapon of war, destroying cultural heritage sites is a common tactic used by armed invaders to deprive a community of its distinct identity. It was no surprise, then, in February of 2022, as Russian troops swept into Ukraine, that historians and cultural heritage specialists braced for the coming destruction. So far in the Russia-Ukraine War, UNESCO has confirmed damage to hundreds of religious and historical buildings and dozens of public monuments, libraries, and museums.

While new technologies like low-cost drones, 3D printing, and private satellite internet may be creating a distinctly 21st century battlefield unfamiliar to conventional armies, another set of technologies is creating new possibilities for citizen archivists off the frontlines to preserve Ukrainian heritage sites.

Backup Ukraine, a collaborative project between the Danish UNESCO National Commission and Polycam, a 3D creation tool, enables anyone equipped with only a phone to scan and capture high-quality, detailed, and photorealistic 3D models of heritage sites, something only possible with expensive and burdensome equipment just a few years ago.

Backup Ukraine is a notable expression of the stunning speed with which 3D capture and graphics technologies are progressing, according to Bilawal Sidhu, a technologist, angel investor, and former Google product manager who worked on 3D maps and AR/VR.

“Reality capture technologies are on a staggering exponential curve of democratization,” he explained to me in an interview for Singularity Hub.

According to Sidhu, generating 3D assets had been possible, but only with expensive tools like DSLR cameras, lidar scanners, and pricey software licenses. As an example, he cited the work of CyArk, a non-profit founded two decades ago with the aim of using professional grade 3D capture technology to preserve cultural heritage around the world.

“What is insane, and what has changed, is today I can do all of that with the iPhone in your pocket,” he says.

In our discussion, Sidhu laid out three distinct yet interrelated technology trends that are driving this progress. First is a drop in cost of the kinds of cameras and sensors which can capture an object or space. Second is a cascade of new techniques which make use of artificial intelligence to construct finished 3D assets. And third is the proliferation of computing power, largely driven by GPUs, capable of rendering graphics-intensive objects on devices widely available to consumers.

Lidar scanners are an example of the price-performance improvement in sensors. First popularized as the bulky spinning sensors on top of autonomous vehicles, and priced in the tens of thousands of dollars, lidar made its consumer-tech debut on the iPhone 12 Pro and Pro Max in 2020. The ability to scan a space in the same way driverless cars see the world meant that suddenly anyone could quickly and cheaply generate detailed 3D assets. This, however, was still only available to the wealthiest Apple customers.

One of the industry’s most consequential turning points occurred that same year when researchers at Google introduced neural radiance fields, commonly referred to as NeRFs.

This approach uses machine learning to construct a credible 3D model of an object or space from 2D pictures or video. The neural network “hallucinates” how a full 3D scene would appear, according to Sidhu. It’s a solution to “view synthesis,” a computer graphics challenge seeking to allow someone to see a space from any point of view from only a few source images.

“So that thing came out and everyone realized we’ve now got state-of-the-art view synthesis that works brilliantly for all the stuff photogrammetry has had a hard time with like transparency, translucency, and reflectivity. This is kind of crazy,” he adds.
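At the heart of NeRF-style view synthesis is volume compositing: a network maps each 3D point along a camera ray to a color and a density, and the pixel's color is a transmittance-weighted blend of those samples. The sketch below shows only the blending step, with hand-picked samples standing in for the learned network's output:

```python
import numpy as np

def composite(colors, sigmas, deltas):
    """NeRF-style compositing along one camera ray.
    colors: (N, 3) RGB per sample; sigmas: (N,) volume densities;
    deltas: (N,) spacing between consecutive samples."""
    alphas = 1.0 - np.exp(-sigmas * deltas)   # opacity of each sample
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# A ray passing first through empty space (zero density), then a
# dense red region that occludes a green region behind it:
colors = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], float)
sigmas = np.array([0.0, 50.0, 50.0])
deltas = np.full(3, 0.1)
print(composite(colors, sigmas, deltas))  # dominated by the red sample
```

Training a NeRF amounts to adjusting the network behind `colors` and `sigmas` until rays rendered this way reproduce the source photos, which is how a handful of 2D images yields a full 3D scene.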

The computer vision community channeled their excitement into commercial applications. At Google, Sidhu and his team explored using the technology for Immersive View, a 3D version of Google Maps. For the average user, the spread of consumer-friendly applications like Luma AI and others meant that anyone with just a smartphone camera could make photorealistic 3D assets. The creation of high-quality 3D content was no longer limited to Apple’s lidar-elite.

Now, another potentially even more promising method of solving view synthesis is earning attention rivaling that early NeRF excitement. Gaussian splatting is a rendering technique that mimics the way triangles are used for traditional 3D assets, but instead of triangles, it’s a “splat” of color expressed through a mathematical function known as a gaussian. As more gaussians are layered together, a highly detailed and textured 3D asset becomes visible. The speed of adoption for splatting is stunning to watch.

It’s only been a few months but demos are flooding X, and both Luma AI and Polycam are offering tools to generate gaussian splats. Other developers are already working on ways of integrating them into traditional game engines like Unity and Unreal. Splats are also gaining attention from the traditional computer graphics industry since their rendering speed is faster than NeRFs, and they can be edited in ways already familiar to 3D artists. (NeRFs don’t allow this given they’re generated by an indecipherable neural net.)
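A rough flavor of the splatting idea, in two dimensions: each splat is a colored gaussian, and each pixel alpha-blends the depth-sorted splats that cover it. Real 3D gaussian splatting also projects anisotropic 3D covariances into screen space and optimizes millions of splats from photos; this sketch keeps them isotropic and hand-placed.

```python
import numpy as np

def render(splats, width, height):
    """Blend 2D gaussian splats front-to-back into an RGB image.
    Each splat: (cx, cy, sigma, rgb_color, opacity), depth-sorted."""
    img = np.zeros((height, width, 3))
    remaining = np.ones((height, width))   # per-pixel transmittance
    ys, xs = np.mgrid[0:height, 0:width]
    for (cx, cy, sigma, color, opacity) in splats:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma**2))
        alpha = opacity * g                # soft, falls off from center
        img += (remaining * alpha)[..., None] * np.asarray(color)
        remaining *= 1.0 - alpha           # occlude splats behind
    return img

splats = [
    (8.0, 8.0, 3.0, (1.0, 0.0, 0.0), 0.9),   # red splat in front
    (12.0, 8.0, 4.0, (0.0, 0.0, 1.0), 0.9),  # blue splat behind it
]
img = render(splats, 24, 16)
print(img[8, 8], img[8, 14])   # mostly red at (8,8); bluer at (14,8)
```

Because each splat is a smooth analytic function rather than a mesh, rendering reduces to evaluating and blending gaussians, which is part of why splats draw faster than NeRFs and can be edited with familiar 3D tooling.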

For a great explanation for how gaussian splatting works and why it’s generating buzz, see this video from Sidhu.

Regardless of the details, for consumers, we are decidedly in a moment where a phone can generate Hollywood-caliber 3D assets that not long ago only well-equipped production teams could produce.

But why does 3D creation even matter at all?

To appreciate the shift toward 3D content, it’s worth noting the technology landscape is orienting toward a future of “spatial computing.” While overused terms like the metaverse might draw eye rolls, the underlying spirit is a recognition that 3D environments, like those used in video games, virtual worlds, and digital twins, have a big role to play in our future. 3D assets like the ones produced by NeRFs and splatting are poised to become the content we’ll engage with there.

Within this context, a large-scale ambition is the hope for a real-time 3D map of the world. While tools for generating static 3D maps have been available, the challenge remains finding ways of keeping those maps current with an ever-changing world.

“There’s the building of the model of the world, and then there’s maintaining that model of the world. With these methods we’re talking about, I think we might finally have the tech to solve the ‘maintaining the model’ problem through crowdsourcing,” says Sidhu.

Projects like Google’s Immersive View are good early examples of the consumer implications of this. While he wouldn’t speculate when it might eventually be possible, Sidhu agreed that at some point, the technology will exist which would allow a user in VR to walk around anywhere on Earth with a real-time, immersive experience of what is happening there. This type of technology will also spill into efforts in avatar-based “teleportation,” remote meetings, and other social gatherings.

Another reason to be excited, says Sidhu, is 3D memory capture. Apple, for example, is leaning heavily into 3D photo and video for their Vision Pro mixed reality headset. As an example, Sidhu told me he recently created a high-quality replica of his parents’ house before they moved out. He could then give them the experience of walking inside of it using virtual reality.

“Having that visceral feeling of being back there is so powerful. This is why I’m so bullish on Apple, because if they nail this 3D media format, that’s where things can get exciting for regular people.”

From cave art to oil paintings, the impulse to preserve aspects of our sensory experience is deeply human. Just as photography once muscled in on still lifes as a means of preservation, 3D creation tools seem poised to displace our long-standing affair with 2D images and video.

Yet just as photography can only ever hope to capture a fraction of a moment in time, 3D models can’t fully replace our relationship to the physical world. Still, for those experiencing the horrors of war in Ukraine, perhaps these are welcome developments offering a more immersive way to preserve what can never truly be replaced.

Image Credit: Polycam

This Week’s Awesome Tech Stories From Around the Web (Through November 4)


GOVERNANCE

The White House Is Preparing for an AI-Dominated Future
Karen Hao and Matteo Wong | The Atlantic
“A year ago, few people could have imagined how chatbots and image generators would change the basic way we think about the internet’s effects on elections, education, labor, or work; only months ago, the deployment of AI in search engines seemed like a fever dream. All of that, and much more in the nascent AI revolution, has begun in earnest. The executive order’s internal conflict over, and openness to, different values and approaches to AI may have been inevitable, then—the result of an attempt to chart a path for a technology when nobody has a reliable map of where it’s going.”

TECH

Everything We Know About Humane’s Bewildering New AI Pin
Lucas Ropek | Gizmodo
“The company, which has received a healthy dose of financial help from prominent companies like OpenAI and Microsoft, has been hyping its new device for months now, promising that it will revolutionize our relationship with computing forever. The device is finally scheduled to drop next week on November 9th. But what is it?”

BIOTECH

Panel Says That Innovative Sickle Cell Cure Is Safe Enough for Patients
Gina Kolata | The New York Times
“The panel’s conclusion on Tuesday about exa-cel’s safety sends it to the FDA for a decision on greenlighting it for broad patient use. Exa-cel frees patients from the debilitating and painful effects of this chronic, deadly disease. If approved, the Vertex product would be the first medicine to treat a genetic disease with the CRISPR gene-editing technique.”

ETHICS

Sam Bankman-Fried’s Wild Rise and Abrupt Crash
David Streitfeld | The New York Times
“Six years ago, Sam Bankman-Fried knew little about alternative currencies. But he correctly bet there were huge opportunities in grabbing a tiny piece of millions of crypto trades. In the blink of an eye, he was lauded as being worth $23 billion. Only Mark Zuckerberg had accumulated so much wealth so young. The Facebook co-founder has his critics, but he looks like Thomas Edison next to Mr. Bankman-Fried. After a speedy trial in Manhattan federal court, the onetime crypto king, now 31, was convicted on Thursday of seven counts of fraud and conspiracy involving his companies FTX and Alameda Research.”

ARTIFICIAL INTELLIGENCE

Forget ChatGPT, Why Llama and Open Source AI Win 2023
Sharon Goldman | VentureBeat
“According to Meta, the open source AI community has fine-tuned and released over 7,000 Llama derivatives on the Hugging Face platform since the model’s release, including a veritable animal farm of popular offspring including Koala, Vicuna, Alpaca, Dolly and RedPajama. …You could consider ChatGPT the equivalent of Barbie, 2023’s biggest blockbuster movie. But Llama and its open source AI cohort are more like the Marvel Universe, with its endless spinoffs and offshoots that have the cumulative power to offer the biggest long-term impact on the AI landscape.”

IMPACT

What Are the Hardest Problems in Tech We Should Be More Focused on as a Society?
Editorial Staff | MIT Technology Review
“Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify what problem at the intersection of technology and society that they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.”

TECH

Almost-Unbeatable AI Is Now a Permanent Feature of Gran Turismo 7
Jonathan M. Gitlin | Ars Technica
“The entertainment company experimented with [the AI] earlier this year for a few weeks in a limited test, but when the GT7 Spec II update rolls out to consoles tomorrow morning, Sophy will be able to race 340 of the game’s cars on nine of its tracks. Ideally, you want the AI in a racing game to be good enough that the race is a challenge, but perhaps not quite so unbeatable that it’s a fool’s errand to try to win.”

ENVIRONMENT

The Ultra-Efficient Farm of the Future Is in the Sky
Matt Simon | Wired
“This is no ordinary green roof, but a sprawling, sensor-laden outdoor laboratory overseen by horticulturalist Jennifer Bousselot. The idea behind rooftop agrivoltaics is to emulate a forest on top of a building. Just as the shade of towering trees protects the undergrowth from sun-stress, so too can solar panels encourage the growth of plants—the overall goal being to grow more food for ballooning urban populations, all while saving water, generating clean energy, and making buildings more energy efficient.”

SPACE

Remains of Planet That Formed the Moon May Be Hiding Near Earth’s Core
John Timmer | Ars Technica
“Deep in the Earth’s mantle there are two regions where seismic waves slow down, termed large low-velocity provinces. …Now, a team of scientists has tied the two regions’ existence back to a catastrophic event that happened early in our Solar System’s history: a giant collision with a Mars-sized planet that ultimately created our Moon.”

Image Credit: Pawel Czerwinski / Unsplash

How to Give AI a ‘Gut Feeling’ for Which Molecules Will Make the Best Drugs


Intuition and AI make a strange couple.

Intuition is hard to describe. It’s that gut feeling that gnaws at you, even if you don’t know why. We naturally build intuition through experience. Gut feelings aren’t always right, but they often creep into our subconscious to supplement logic and reasoning when making decisions.

AI, in contrast, rapidly learns by digesting millions of cold, hard data points, producing purely analytical—if not always reasonable—results based on its input.

Now, a new study in Nature Communications marries the odd pair, resulting in a machine learning system that captures a chemist’s intuition for drug development.

By analyzing feedback from 35 chemists at Novartis, a pharmaceutical company based in Switzerland, the team developed an AI model that learns from human expertise in a notoriously difficult stage of drug development: finding promising chemicals compatible with our biology.

First, the chemists used their intuition to choose which of 5,000 chemical pairs had a higher chance of becoming a useful drug. From this feedback, a simple artificial neural network learned their preferences. When challenged with new chemicals, the AI model gave each one a score ranking whether it was worth developing further as a medication.
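The study’s exact architecture isn’t reproduced here, but learning a scoring function from pairwise choices can be sketched with a simple Bradley-Terry-style learner. Everything below—the feature vectors, the linear scorer standing in for the paper’s neural network, and the simulated “chemist” feedback—is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 8  # toy molecular descriptors (invented for illustration)

# Hidden "chemist preference" used only to simulate pairwise feedback.
true_w = rng.normal(size=n_features)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bradley-Terry model: P(A preferred over B) = sigmoid(score(A) - score(B)).
# A linear scorer stands in for a neural network to keep the sketch short.
w = np.zeros(n_features)
lr = 0.1
for _ in range(5000):
    xa, xb = rng.normal(size=(2, n_features))
    preferred_a = float(true_w @ xa > true_w @ xb)  # simulated chemist choice
    p_a = sigmoid(w @ (xa - xb))
    w -= lr * (p_a - preferred_a) * (xa - xb)       # logistic-loss gradient step

# The trained scorer ranks unseen "molecules": higher means more preferred.
def score(x):
    return float(w @ x)
```

The key design choice mirrors the article: the model never sees an absolute “goodness” label, only which of two options a human preferred, and the score emerges from accumulating thousands of those comparisons.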

Without any details on the chemical structures themselves, the AI “intuitively” scored certain structural components, which often occur in existing medications, higher than others. Surprisingly, it also captured nebulous properties not explicitly programmed in previous computer modeling attempts. Paired with a generative AI model, like DALL-E, the robo-chemist designed a slew of new molecules as potential leads.

Many promising drug candidates were based on “collective know-how,” wrote the team.

The study is a collaboration between Novartis and Microsoft Research AI4Science, the latter based in the UK.

Down the Chemical Rabbit Hole

Most of our everyday medicines are made from small molecules—Tylenol for pain, metformin for diabetes management, antibiotics to fight off bacterial infections.

But finding these molecules is a pain.

First, scientists need to understand how the disease works. For example, they decipher the chain of biochemical reactions that give you a pounding headache. Then they find the weakest link in the chain, which is often a protein, and model its shape. Structure in hand, they pinpoint nooks and crannies that molecules can jam into to disrupt the protein’s function, thereby putting a stop to the biological process—voilà, no more headaches.

Thanks to protein prediction AI, such as AlphaFold, RoseTTAFold, and their offshoots, it’s now easier to model the structure of a target protein. Finding a molecule that fits it is another matter. The drug doesn’t just need to alter the target’s activity. It also must be easily absorbed, spread to the target organ or tissue, and be safely metabolized and eliminated from the body.

Here’s where medicinal chemists come in. These scientists are pioneers in the adoption of computer modeling. Over two decades ago, the field began using software to sift through enormous databases of chemicals in search of promising leads. Each potential lead is then evaluated by a team of chemists before further development.

Through this process, medicinal chemists build an intuition that allows them to make decisions efficiently when reviewing promising drug candidates. Some of their training can be distilled into rules for computers to learn—for example, this structure likely won’t pass into the brain; that one could damage the liver. These expert rules have helped with initial screening. But so far, no program can capture the subtleties and intricacies of their decision-making, partly because the chemists can’t explain it themselves.

I’ve Got a Feeling

The new study sought to capture the unexplainable in an AI model.

The team recruited 35 expert chemists at various Novartis centers around the world, each with different expertise. Some work with cells and tissues, for instance, others with computer modeling.

Intuition is hard to measure. It’s also not exactly reliable. As a baseline, the team designed a multiplayer game to gauge if each chemist was consistent in their choices and whether their picks agreed with those of others. Each chemist was shown 220 molecule pairs and asked an intentionally vague question. For example, imagine you’re in an early virtual screening campaign, and we need a drug that can be taken as a pill—which molecule would you prefer?

The goal was to reduce overthinking, pushing the chemists to rely on their intuition for which chemical stays and which goes. This setup differs from usual evaluations, where the chemists check off specific molecular properties with predictive models—that is, hard data.

The chemists were consistent in their own judgment, but didn’t always agree with each other—likely because of differing personal experiences. However, there was enough overlap to form an underlying pattern an AI model could learn from, explained the team.

They next built up the dataset to 5,000 molecule pairs. The molecules, each labeled with information on its structure and other features, were used to train a simple artificial neural network. With training, the AI network further adjusted its inner workings based on feedback from the chemists, eventually giving each molecule a score.

As a sanity check, the team tested the model on chemical pairs different from those in its training dataset. As they increased the number of training samples, performance shot up.

While earlier computer programs have relied on rules for what makes a promising medicine based on molecular structure, the new model’s scores didn’t directly reflect any of these rules. The AI captured a more holistic view of a chemical—a totally different approach to drug discovery than that used in classic robo-chemist software.

Using the AI, the team then screened hundreds of FDA-approved drugs and thousands of molecules from a chemical databank. Even without explicit training, the model extracted chemical structures—called “fragments”—that are more amenable to further development as medicines. The AI’s scoring preferences matched those of existing drug-like molecules, suggesting it had grasped the gist of what makes a potential lead.

Chemical Romance

Novartis isn’t the first company to explore a human-robot chemical romance.

Previously, the pharmaceutical company Merck also tapped into their in-house expertise to rank chemicals for a desirable trait. Outside the industry, a team at the University of Glasgow explored using intuition-based robots for inorganic chemical experiments.

It’s still a small study, and the authors can’t rule out human fallibility. Some chemists might choose a molecule based on personal biases that are hard to completely avoid. However, the setup could be used to study other steps in drug discovery that are expensive to test experimentally. And while the model is based on intuition, its results could be bolstered by rule-based filters to further improve performance.

We’re in an era where machine learning can design tens of thousands of molecules, explained the team. An assistant AI chemist, armed with intuition, could help narrow down candidates at the critical early stage of drug discovery, and in turn, accelerate the whole process.

Image Credit: Eugenia Kozyr / Unsplash

Scientists Fire Up the World’s Largest Fusion Reactor for the First Time


Fusion power startups have attracted considerable attention and investment in the last few years. But the powering up of the world’s largest fusion reactor in Japan shows long-term, government-run projects still have a lead.

Last week, scientists working on the JT-60SA experimental reactor at the National Institutes for Quantum Science and Technology in the city of Naka achieved “first plasma,” according to Science. That effectively means the machine was successfully switched on but is still a long way from carrying out meaningful tests or producing any power.

Nonetheless, it’s a significant milestone for a reactor meant to pave the way for the much larger ITER reactor being built in France, which is expected to be the first of its kind to generate more power than it uses. Both projects are part of a 2007 deal reached between Japan and the EU to cooperate on fusion research, and lessons learned from operating JT-60SA will guide the development of ITER.

The reactor follows a well-established design known as a tokamak, which features a doughnut-shaped chamber surrounded by coiled superconducting magnets. These magnets are used to generate powerful magnetic fields able to contain an extremely hot cloud of ionized gas known as a plasma. In this case, the plasma is made of hydrogen and its isotope deuterium.

When the temperatures get high enough, the atoms in the plasma fuse together, generating huge amounts of energy in the form of radiation and heat. This is absorbed by the walls of the reactor and used to turn water into steam that can drive a turbine to create electricity.

The JT-60SA is 15.5 meters tall and can hold 135 cubic meters of plasma, making it the largest tokamak built to date, but it’s still a long way from functioning as a power plant. As with its predecessors, sustaining its reaction will consume significantly more power than the fusion produces.

But the new reactor isn’t supposed to reach energy breakeven. Its mission is to act as a test bed for ITER, which is currently under construction in Cadarache in southern France, by helping investigate plasma stability and how it affects power output. ITER will be nearly twice as tall as JT-60SA and capable of holding 830 cubic meters of plasma.

Once it is fully up and running, ITER is expected to generate 500 megawatts of power from its plasma while using only 50 megawatts to heat it up. It isn’t designed to generate electricity from that power, but achieving this kind of energy gain would be a crucial milestone on the road to commercial fusion power plants.
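In the standard fusion shorthand, that target corresponds to a fusion gain factor of ten:

```latex
Q \;=\; \frac{P_{\text{fusion}}}{P_{\text{heating}}} \;=\; \frac{500\ \text{MW}}{50\ \text{MW}} \;=\; 10
```

Breakeven is Q = 1, and a commercial plant would need a substantially higher overall gain once the electricity required to run the entire facility, not just the plasma heating, is counted.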

The JT-60SA reactor is expected to reach full power in the next two years, while ITER is aiming for first plasma by 2025 and full operation by 2035. But both projects have experienced significant delays and had to regularly update their timelines, contributing to fusion power’s reputation as a technology that’s perpetually 20 years away.

In the meantime, a new crop of fusion power startups has emerged with much more aggressive schedules. Companies like Commonwealth Fusion Systems think they could have a working fusion power plant up and running by the early 2030s, and Helion Energy has signed an energy purchase agreement with Microsoft to start supplying electricity as early as 2028.

These companies are betting they can overtake the more ponderous government-run initiatives that have been making slow and steady progress over decades. Whether these ambitious goals pan out remains to be seen, and it’s worth remembering that the only facility to achieve a net energy gain in a fusion reaction so far is the Lawrence Livermore National Laboratory.

But having both private and public investment in fusion power can only be a good thing. The more people working on the problem, the faster it’s likely to get solved.

Image Credit: Engin Akyurt / Pixabay

Amazon Delivery Drones: How the Sky Could Be the Limit for Market Dominance


Amazon’s latest plan to use drones to deliver packages in the UK by the end of 2024 is essentially a relaunch. It was 10 years ago that the company’s founder Jeff Bezos first announced it would fly individual packages through the sky.

Three years later, an impressive promotional video revealed that the project was starting out in the British city of Cambridge. But by 2021, the operation appeared to have come to an abrupt halt.

Now it seems the company was undeterred by that pause. The dream of sending drones to UK homes bearing (not very heavy) items that we cannot wait more than 30 minutes to have is back in play. So, will it work this time?

In the US, progress has been sluggish. Amazon managed a grand total of 100 deliveries in May 2023, in two locations. (At one of these locations, in Texas, the company has to pause operations when the temperature gets too high).

Despite this, Amazon plans to launch delivery drones in two new areas—one in the UK and one in Italy (precise locations are yet to be disclosed). It has a new model of drone and a vast logistical network at its disposal.

Aside from these key factors, Amazon may well have been inspired by other companies in the sector. The most obvious example is drone delivery of vital medical supplies.

Zipline started delivering blood and medicine to remote places in Rwanda and has now expanded to Ghana and the US state of North Carolina. Other companies such as UPS and Google’s Wing have started offering similar services.

But what these success stories have in common is that they are cost-efficient—pharmaceutical products weigh little and are typically expensive enough to justify the use of a drone—and they are focused on areas which are not densely populated.

In contrast, Amazon’s own estimates put the cost of delivering a single package at $484 today, which it expects to reduce to $63 by 2025. Offering customers free or cheap drone delivery will be extremely expensive.

Amazon’s solution to this is likely to be the same one it has used so successfully over the last two decades: increasing the scale of its operation. After all, at the start of the century, many wondered how e-commerce could ever be profitable. Now, millions of people buy from Amazon, and that vast number of customers is key to its success.

But Amazon’s business plan seems to rely on dominating the market. And for air deliveries, this means not only dropping packages in rural areas, but being available in cities where more than half the world’s population live.

While it may be easy to convince the residents of a small, low-density area to trial boxes of toothpaste and mouthwash landing in their gardens, it might be much more difficult to persuade residents of apartment buildings to accept drones flying past their windows carrying their neighbor’s delivery of dog biscuits.

Added to this are the laws regulating the use of drones. In the UK, for example, you are not allowed to fly one over congested areas or within 50 meters “of a person, vehicle or building not under your control.”

The Higher They Fly, the Harder They Fall

Cities will not simply let commercial drones take to the skies—at least not without charging for the nuisance they generate. They will either ban drones in densely populated areas, or seek further regulation.

If regulation is the route taken, a new hurdle arises which is similar to the allocation of radio waves or mobile phone network licenses—that there will only be enough space for a few operators (sometimes just one).

This allocation usually happens through a bidding process. And studies of auctions of telecom licenses show the importance of involving multiple credible operators. But having different firms winning the right to deliver in different cities could easily reduce the level of reach that Amazon would need to succeed.

An alternative scenario would see a single operator in charge of all drone deliveries. But this raises a familiar economic problem, where natural monopolies emerge in sectors like water provision or other kinds of infrastructure.

For, while society can often benefit from the innovation potential of the private sector, having only one firm in the market opens up the possibility of abuse. For instance, the privatization of water in the UK has come with a regulator which chooses the prices companies can charge and never-ending debates on the regulation of sewage and leakages.

Regardless of which company is awarded the business, external regulation usually involves a requirement to treat all consumers fairly and equally—which would mean charging Amazon the same price as its competitors to use the drones.

But fairness and equality are not the goals big companies are interested in when they invest heavily in innovative technology. Their goal is to obtain or keep a dominant position in the market.

Amazon’s current dominance largely relies on its superior logistical operation: it can deliver quickly, cheaply, and reliably everywhere. With drone delivery available to other platforms at the same price, Amazon would lose this competitive advantage. So, if it does manage a successful launch this time around, it could well come at the expense of its current dominance as a logistical operation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Dose Media / Unsplash 

Like Humans, This Breakthrough AI Makes Concepts Out of the Words It Learns


Prairie dogs are anything but dogs. With a body resembling a Hershey’s Kiss and a highly sophisticated chirp for communications, they’re more hamster than golden retriever.

Humans immediately get that prairie dogs aren’t dogs in the usual sense. AI struggles.

Even as toddlers, we have an uncanny ability to turn what we learn about the world into concepts. With just a few examples, we form an idea of what makes a “dog” or what it means to “jump” or “skip.” These concepts are effortlessly mixed and matched inside our heads, resulting in a toddler pointing at a prairie dog and screaming, “But that’s not a dog!”

Last week, a team from New York University created an AI model that mimics a toddler’s ability to generalize language learning. In a nutshell, generalization is a sort of flexible thinking that lets us use newly learned words in new contexts—like an older millennial struggling to catch up with Gen Z lingo.

When pitted against adult humans in a language task for generalization, the model matched their performance. It also beat GPT-4, the AI algorithm behind ChatGPT.

The secret sauce was surprisingly human. The new neural network was trained to reproduce errors from human test results and learn from them.

“For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” said study author Dr. Brenden Lake. “We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”

A Brainy Feud

Most AI models rely on deep learning, a method loosely based on the brain.

The idea is simple. Artificial neurons interconnect to form neural networks. By changing the strengths of connections between artificial neurons, neural networks can learn many tasks, such as driving autonomous taxis or screening chemicals for drug discovery.

However, the biological neural networks in the brain are far more powerful. Their connections rapidly adapt to ever-changing environments and stitch together concepts from individual experiences and memories. As an example, we can easily identify a wild donkey crossing the road and know when to hit the brakes. A robot car may falter without wild-donkey-specific training.

The pain point is generalization. For example: What is a road? Is it a paved highway, a rugged dirt path, or a hiking trail surrounded by shrubbery?

Back in the 1980s, cognitive scientists Jerry Fodor and Zenon Pylyshyn famously proposed that artificial neural networks aren’t capable of understanding concepts—such as a “road”—much less flexibly using them to navigate new scenarios.

The scientists behind the new study took the challenge head on. Their solution? An artificial neural network that’s fine-tuned on human reactions.

Man With Machine

As a baseline, the team first asked 25 people to learn a new made-up language. Compared to using an existing one, a fantasy language prevents bias when testing human participants.

The research went “beyond classic work that relied primarily on thought experiments” to tap into human linguistic abilities, the authors explained in their study. The test differed from previous setups that mostly focused on grammar. Instead, the point was for participants to understand and generalize in the made-up language from words alone.

Like they were teaching a new language, the team started with a bunch of simple nonsense words: “dax,” “lug,” “wif,” or “zup.” These translate as basic actions such as skipping or jumping.

The team then introduced more complex words, “blicket” or “kiki,” that can be used to string the previous words together into sentences—and in turn, concepts and notions. These abstract words, when used with the simple words, can mean “skip backwards” or “hop three times.”

The volunteers were trained to associate each word with a color. For example, “dax” was red, “lug” was blue. The colors helped the volunteers learn rules of the new language. One word combination resulted in three red circles, another flashed blue. But importantly, some words, such as “fep,” lit up regardless of other words paired with it—suggesting a grammatical basis in the fantasy language.
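The study’s actual grammar isn’t published in the article, but the flavor of the task can be sketched with a toy interpreter in which primitive words emit colored circles and a function word rewrites the output of the word before it. The words come from the article; the specific rules below are invented for illustration:

```python
# Toy version of the made-up-language task. The words are from the
# article, but the rules are invented for illustration only.
PRIMITIVES = {"dax": "RED", "lug": "BLUE", "wif": "GREEN", "zup": "YELLOW"}

def interpret(utterance):
    """Map an utterance to a sequence of colored circles.

    Illustrative rule: a primitive emits one circle of its color;
    "fep" triples the output of the word immediately before it,
    regardless of which word that is.
    """
    output, last = [], []
    for word in utterance.split():
        if word in PRIMITIVES:
            last = [PRIMITIVES[word]]
            output.extend(last)
        elif word == "fep":
            output.extend(last * 2)  # the first copy was already emitted
    return output
```

Because “fep” applies uniformly to whatever precedes it, a learner that has grasped the rule can immediately apply it to a brand-new primitive it has never seen paired with “fep” before—the kind of systematic generalization the study set out to measure.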

After 14 rounds of learning, the volunteers were challenged with 10 questions about the meaning of the made-up words and asked to generalize to more complex questions. For each task, the participants had to select the corresponding color circles and place them in the appropriate order to form a phrase.

They excelled. The humans picked the correct colors roughly 80 percent of the time. Many of the errors were “one-to-one” translation problems, which translated a word to its basic meaning without considering the larger context.

A second group of 29 people also rapidly learned the fantasy language, translating combinations such as “fep fep” without trouble.

Language Learned

To build the AI model, the team focused on several criteria.

One, it had to generalize from just a few instances of learning. Two, it needed to respond like humans to errors when challenged with similar tasks. Finally, the model had to learn and easily incorporate words into its vocabulary, forming a sort of “concept” for each word.

To do this, the team used meta-learning for compositionality. Yes, it sounds like a villain’s superpower. But what it does is relatively simple.

The team gave an artificial neural network tasks like the ones given to the human volunteers. The network is optimized as dynamic “surges” change its overall function, allowing it to better learn on the fly compared to standard AI approaches, which rely on static data sets. Usually, these machines process a problem using a set of study examples. Think of it as deciphering Morse code. They receive a message—dots and dashes—and translate the sequence into normal English.

But what if the language isn’t English, and it has its own concepts and rules? A static training set would fail the AI wordsmith.

Here, the team guided the AI through a “dynamic stream” of tasks that required the machine to mix-and-match concepts. In one example, it was asked to skip twice. The AI model independently learned the notion of “skip”—as opposed to “jump”—and that twice means “two times.” These learnings were then fed through the neural network, and the resulting behavior was compared to the instruction. If, say, the AI model skipped three times, the results provided feedback to help nudge the AI model towards the correct response. Through repetition, it eventually learned to associate different concepts.

Then came the second step. The team added a new word, say, “tiptoe,” into a context the AI model had already learned, like movement, and then asked it to “tiptoe backwards.” The model now had to learn to combine “tiptoe” into its existing vocabulary and concepts of movement.

To further train the AI, the team fed it data from the human participants so it might learn from human errors. When challenged with new puzzles, the AI mimicked human responses in 65 percent of the trials, outperforming similar AI models—and in some cases, beating human participants.

The model raises natural questions for the future of language AI, wrote the team. Rather than teaching AI models grammar with examples, giving them a broader scope might help them mimic children’s ability to grasp languages by combining different linguistic components.

Using AI can help us understand how humans have learned to combine words into phrases, sentences, poetry, and essays. The systems could also lead to insights into how children build their vocabulary, and in turn, form a gut understanding of concepts and knowledge about the world. Language aside, the new AI model could also help machines parse other fields, such as mathematics, logic, and even, in a full circle, computer programming.

“It’s not magic, it’s practice. Much like a child also gets practice when learning their native language, the models improve their compositional skills through a series of compositional learning tasks,” Lake told Nature.

Image Credit: Andreas Fickl / Unsplash 

Why Google and Bing’s Embrace of Generative AI Could Upend the SEO Industry


Google, Microsoft, and others boast that generative artificial intelligence tools like ChatGPT will make searching the internet better than ever for users. For example, rather than having to wade through a sea of URLs, users will be able to just get an answer combed from the entire internet.

There are also some concerns with the rise of AI-fueled search engines, such as the opacity over where information comes from, the potential for “hallucinated” answers, and copyright issues.

But I believe one other consequence may be the destruction of the $68 billion search engine optimization industry that companies like Google helped create.

For the past 25 years or so, websites, news outlets, blogs, and many others with a URL that wanted to get attention have used search engine optimization, or SEO, to “convince” search engines to share their content as high as possible in the results they provide to readers. This has helped drive traffic to their sites and has also spawned an industry of consultants and marketers who advise on how best to do that.

As an associate professor of information and operations management, I study the economics of e-commerce. I believe the growing use of generative AI will likely make all of that obsolete.

How Online Search Works

Someone seeking information online opens her browser, goes to a search engine and types in the relevant keywords. The search engine displays the results, and the user browses through the links displayed in the result listings until she finds the relevant information.

To attract the user’s attention, online content providers use various search engine marketing strategies, such as search engine optimization, paid placements, and banner displays.

For instance, a news website might hire a consultant to help it highlight key words in headlines and in metadata so that Google and Bing elevate its content when a user searches for the latest information on a flood or political crisis.
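To see why keyword placement matters, here is a toy ranker (all page data and scoring are invented for illustration; real search engines use far richer signals): it scores pages by how often query terms appear in their title and metadata, so the keyword-optimized page rises to the top.

```python
# Toy ranker: score a page by counting query-term occurrences in its
# title and metadata. Only illustrates why keyword placement matters.
def score(page, query_terms):
    text = (page["title"] + " " + page["metadata"]).lower()
    return sum(text.count(term) for term in query_terms)

pages = [
    {"title": "Flood updates", "metadata": ""},
    {"title": "Flood crisis: latest flood news", "metadata": "flood, news, crisis"},
]
query = ["flood", "news"]

ranked = sorted(pages, key=lambda p: score(p, query), reverse=True)
print(ranked[0]["title"])  # the keyword-optimized page ranks first
```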

How Generative AI Changes the Search Process

But this all depends on search engines luring tens of millions of users to their websites. And so to earn users’ loyalty and web traffic, search engines must continuously work on their algorithms to improve the quality of their search results.

That’s why, even if it could hurt a part of their revenue stream, search engines have been quick to experiment with generative AI to improve search results. And this could fundamentally change the online search ecosystem.

All the biggest search engines have already adopted or are experimenting with this approach. Examples include Google’s Bard, Microsoft’s Bing AI, Baidu’s ERNIE, and DuckDuckGo’s DuckAssist.

Rather than getting a list of links, both organic and paid, based on whatever keywords or questions a user types in, generative AI will instead simply give you a text result in the form of an answer. Say you’re planning a trip to Destin, Florida, and type the prompt “Create a three-day itinerary for a visitor there.” Instead of a bunch of links to Yelp and blog postings that require lots of clicking and reading, typing this prompt into Bing AI will result in a detailed three-day itinerary.

Side-by-side comparison of search results in regular Bing and the AI version from the prompt: ‘Create a 3-day itinerary for a visitor to Destin Florida.’ Image Credit: Microsoft Bing

Over time, as the quality of AI-generated answers improves, users will have less incentive to browse through search result listings. They can save time and effort by reading the AI-generated response to their query.

In other words, it would allow you to bypass all those paid links and costly efforts by websites to improve their SEO scores, rendering them useless.

When users start ignoring the sponsored and editorial result listings, this will have an adverse impact on the revenues of SEO consultants and search engine marketers and, ultimately, on the bottom line of search engines themselves.

The Financial Impact

This financial impact cannot be ignored.

For example, the SEO industry generated $68.1 billion globally in 2022. It had been expected to reach $129.6 billion by 2030, but these projections were made before the emergence of generative AI put the industry at risk of obsolescence.

As for search engines, monetizing online search services is a major source of their revenue. They get a cut of the money that websites spend on improving their online visibility through paid placements, ads, affiliate marketing, and the like, collectively known as search engine marketing. For example, approximately 58% of Google’s 2022 revenues—or almost $162.5 billion—came from Google Ads, which provides some of these services.

Search engines run by massive companies with many revenue streams, like Google and Microsoft, will likely find ways to offset the losses by coming up with strategies to make money off generative AI answers. But the SEO marketers and consultants who depend on search engines—mostly small- and medium-sized companies—will no longer be needed as they are today, and so the industry is unlikely to survive much longer.

A Not-Too-Distant Future

But don’t expect the SEO industry to fade away immediately. Generative AI search engines are still in their infancy and must address certain challenges before they’ll dominate search.

For one thing, most of these initiatives are still experimental and often available only to certain users. And for another, generative AI has been notorious for providing incorrect, plagiarized, or simply made-up answers.

That means it’s unlikely at the moment to gain the trust or loyalty of many users.

Given these challenges, it is not surprising that generative AI has yet to transform online search. However, given the resources available to researchers working on generative AI models, it is safe to assume that eventually these models will become better at their task, leading to the death of the SEO industry.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Gerd Altmann / Pixabay

This Week’s Awesome Tech Stories From Around the Web (Through October 28)

FUTURE

Ilya Sutskever, OpenAI’s Chief Scientist, on His Hopes and Fears for the Future of AI
Will Douglas Heaven | MIT Technology Review
“A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago. As he tells me himself, ChatGPT has already rewritten a lot of people’s expectations about what’s coming, turning ‘will never happen’ into ‘will happen faster than you think.’ ‘It’s important to talk about where it’s all headed,’ he says, before predicting the development of artificial general intelligence (by which he means machines as smart as humans) as if it were as sure a bet as another iPhone.”

BIOTECH

Three People Were Gene-Edited in an Effort to Cure Their HIV. The Result Is Unknown.
Antonio Regalado | MIT Technology Review
“In a remarkable experiment, a biotechnology company called Excision BioTherapeutics says it added the gene-editing tool to the bodies of three people living with HIV and commanded it to cut, and destroy, the virus wherever it is hiding. The early-stage study is a probing step toward the company’s eventual goal of curing HIV infection with a single intravenous dose of a gene-editing drug.”

ROBOTICS

Boston Dynamics Uses ChatGPT to Create a Robot Tour Guide
Trevor Mogg | Digital Trends
“The best part is how Spot behaves when instructed to adopt different personalities. Check out the British butler guide at the start of the video, for example, and the sarcastic guide a few minutes in. The Shakespearean actor is also very impressive. …The software engineer [Matt Klingensmith] said he was also surprised by some of the responses. For example, when he asked Spot to show him its parents, the robot took him over to an early version of Spot among Boston Dynamics’ display of robots.”

FUTURE

People Are Speaking With ChatGPT for Hours, Bringing 2013’s Her Closer to Reality
Benj Edwards | Ars Technica
“In the film, Joaquin Phoenix’s character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016. In reality, ChatGPT isn’t as situationally aware as Samantha was in the film, does not have a long-term memory, and OpenAI has done enough conditioning on ChatGPT to keep conversations from getting too intimate or personal. But that hasn’t stopped people from having long talks with the AI assistant to pass the time anyway.”

CRYPTOCURRENCY

They Cracked the Code to a Locked USB Drive Worth $235 Million in Bitcoin. Then It Got Weird
Andy Greenberg | Wired
“Stefan Thomas lost the password to an encrypted USB drive holding 7,002 bitcoins. One team of hackers believes they can unlock it—if they can get Thomas to let them. …Thomas had already made a ‘handshake deal’ with two other cracking teams a year earlier, he explained. …’We cracked the IronKey,’ says Nick Fedoroff, Unciphered’s director of operations. ‘Now we have to crack Stefan. This is turning out to be the hardest part.’”

AUTOMATION

You Can Now Order a Waymo Robotaxi on the Uber App in Phoenix
Nikki Main | Gizmodo
“Riders who book an UberX, Uber Green, Uber Comfort, or Uber Comfort Electric could be paired with an autonomous Waymo vehicle. The Waymo ride option is currently available in Metro Phoenix and will only match a rider with a driverless vehicle if the route is part of Waymo’s operating territory. Riders who are dubious about getting in a Waymo vehicle will have the option to opt-out before the robotaxi is sent and instead send them with a human driver.”

REGULATION

Cruise, GM’s Robotaxi Service, Suspends All Driverless Operations Nationwide
Staff | Associated Press
“Cruise, the autonomous vehicle unit owned by General Motors, is suspending driverless operations nationwide days after regulators in California found that its driverless cars posed a danger to public safety. …’We have decided to proactively pause driverless operations across all of our fleets while we take time to examine our processes, systems, and tools and reflect on how we can better operate in a way that will earn public trust,’ Cruise wrote on X, the platform formerly known as Twitter, on Thursday night.”

COMPUTING

Scientists Accidentally Created Material for Superfast Computer Chips
Kiona Smith | Inverse
“Chemists say their new semiconductor could cut processing speeds to femtoseconds, but there’s a catch. …Rhenium, a key ingredient in Re6Se8Cl2, is one of the rarest elements on Earth. Making computer chips out of it is always going to be much too expensive to even think about. But Tulyag and his colleagues say they’ve learned enough from Re6Se8Cl2 and its unusual properties to go looking for other materials that could do the same thing—in much more reasonable quantities.”

ENERGY

Energy Agency Sees Peaks in Global Oil, Coal and Gas Demand by 2030
Brad Plumer | The New York Times
“For more than a century, the world’s appetite for fossil fuels has been expanding relentlessly, as humans have continued burning larger amounts of coal, oil and natural gas almost every year to power homes, cars and factories. But a remarkable shift may soon be at hand. The world’s leading energy agency now predicts that global demand for oil, natural gas and coal will peak by 2030, partly driven by policies that countries have already adopted to promote cleaner forms of energy and transportation.”

TRANSPORTATION

China Gives Ehang the First Industry Approval for Fully Autonomous, Passenger-Carrying Air Taxis
Evelyn Chang | CNBC
“Guangzhou-based Ehang on Friday said it received an airworthiness ‘type certificate’ from the Civil Aviation Administration of China for its fully autonomous drone, the EH216-S AAV, that carries two human passengers. US-listed Ehang claims it’s the first in the world to get such a certificate. ‘Next year we should start to expand overseas,’ Ehang CEO Huazhi Hu said in an interview, via a CNBC translation of his Mandarin-language remarks.”

TECH

Google Fiber Is Getting Outrageously Fast 20Gbps Service
Ron Amadeo | Ars Technica
“As always with Google Fiber, this is a symmetrical connection with 20Gbps down and up, so you can create content, like posting a YouTube video, in a flash. …I live in a bandwidth desert ruled by the local broadband monopoly, Comcast, and this is 1,000 times more upload speed than the nearly 20Mbps upload Comcast will sell me.”

Image Credit: Alex Shuper / Unsplash

AI in the C-Suite? Why We’ll Need New Laws to Govern AI Agents in Business

The prospect of companies or other organizations run by AI looks increasingly plausible. Researchers say we need to update our laws and the way we train AI to account for this eventuality.

Recent breakthroughs in large language models are forcing us to reimagine what capabilities are purely human. There is some debate around whether these algorithms can understand and reason the same way humans do, but in a growing number of cognitive tasks, they can achieve similar or even better performance than most humans.

This is driving efforts to incorporate these new skills into all kinds of business functions, from writing marketing copy to summarizing technical documents or even assisting with customer support. And while there may be fundamental limits to how far up the corporate ladder AI can climb, taking on managerial or even C-suite roles is creeping into the realm of possibility.

That’s why legal experts in AI are now calling for us to adapt laws given the possibility of AI-run companies and for developers of the technology to alter how they train AI to make the algorithms law-abiding in the first place.

“A legal singularity is afoot,” Daniel Gervais, a law professor at Vanderbilt University, and John Nay, an entrepreneur and Stanford CodeX fellow, write in an article in Science. “For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new ‘species’ of legal subjects.”

While non-human entities like rivers or animals have sometimes been granted the status of legal subjects, the authors write, one of the main barriers to their full participation in the law is the inability to use or understand language. With the latest batch of AI, that barrier has either been breached already or will be soon, depending on who you ask.

This opens the prospect, for the first time, of non-human entities directly interacting with the law. Indeed, the authors point out that lawyers already use AI-powered tools to help them do their jobs, and recent research has shown that LLMs can carry out a wide range of legal reasoning tasks.

And while today’s AI is still far from being able to run a company by itself, they highlight that in some jurisdictions there are no rules requiring that a corporation be overseen by humans, and the idea of an AI managing the affairs of a business is not explicitly barred by law.

If such an AI company were to arise, it’s not entirely clear how the courts would deal with it. The two most common consequences for breaches of the law are financial penalties and imprisonment, which do not translate particularly well to a piece of disembodied software.

While banning AI-controlled companies is a possibility, the authors say it would require massive international legislative coordination and could stifle innovation. Instead, they argue that the legal system should lean into the prospect and work out how best to deal with it.

One important avenue is likely to be coaxing AI to be more law-abiding. This could be accomplished by training a model to predict which actions are consistent with particular legal principles. That model could then teach other models, trained for different purposes, how to take actions in line with the law.
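As a minimal sketch of that idea (every action name and score below is hypothetical, not drawn from any real system), a separate compliance model can filter a task model’s candidate actions before the best remaining one is chosen:

```python
# Hypothetical scores from two models: one rates how well an action serves
# the business objective, the other predicts its legal compliance.
TASK_SCORE = {
    "undercut rivals via collusion": 0.9,
    "cut prices within antitrust rules": 0.7,
    "do nothing": 0.1,
}
COMPLIANCE_SCORE = {
    "undercut rivals via collusion": 0.05,
    "cut prices within antitrust rules": 0.95,
    "do nothing": 1.0,
}

def choose_action(candidates, min_compliance=0.5):
    """Pick the highest-value action among those the compliance model allows."""
    legal = [a for a in candidates if COMPLIANCE_SCORE[a] >= min_compliance]
    return max(legal, key=TASK_SCORE.get)

print(choose_action(list(TASK_SCORE)))  # the collusive option is filtered out
```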

The ambiguous nature of the law, which can be highly contextual and often must be hashed out in court, makes this challenging. But the authors call for training methods that imbue what they call “spirit of the law” into algorithms rather than more formulaic rules about how to behave in different situations.

Regulators could make this kind of training for AIs a legal requirement, and the authorities could also develop their own AI designed to monitor the behavior of other models to ensure they’re in compliance with the law.

While the researchers acknowledge some may scoff at the idea of allowing AIs to directly control companies, they argue it’s better to bring them into the fold early so we can work through potential challenges.

“If we don’t proactively wrap AI agents in legal entities that must obey human law, then we lose considerable benefits of tracking what they do, shaping how they do it, and preventing harm,” they write.

Image Credit: Shawn Suttle / Pixabay

Carl Sagan Detected Life on Earth 30 Years Ago—Here’s Why His Experiment Still Matters Today

It’s been 30 years since a group of scientists led by Carl Sagan found evidence for life on Earth using data from instruments on board NASA’s Galileo robotic spacecraft. Yes, you read that correctly. Among his many pearls of wisdom, Sagan was famous for saying that science is more than a body of knowledge—it is a way of thinking.

In other words, how humans go about the business of discovering new knowledge is at least as important as the knowledge itself. In this vein, the study was an example of a “control experiment”—a critical part of the scientific method. This can involve asking whether a given study or method of analysis is capable of finding evidence for something we already know.

Suppose one were to fly past Earth in an alien spacecraft with the same instruments on board as Galileo had. If we knew nothing else about Earth, would we be able to unambiguously detect life here, using nothing but these instruments (which wouldn’t be optimized to find it)? If not, what would that say about our ability to detect life anywhere else?

Galileo launched in October 1989 on a six-year flight to Jupiter. However, Galileo had to first make several orbits of the inner solar system, making close flybys of Earth and Venus, in order to pick up enough speed to reach Jupiter.

In the mid-2000s, scientists took samples of dirt from the Mars-like environment of Chile’s Atacama Desert, which is known to contain microbial life. They then ran experiments similar to those carried out by the NASA Viking spacecraft (which aimed to detect life on Mars when they landed there in the 1970s) to see whether life could be detected in the Atacama samples.

They failed—the implication being that had the Viking spacecraft landed on Earth in the Atacama Desert and performed the same experiments as they did on Mars, they might well have missed signatures for life, even though it is known to be present.

Galileo Results

Galileo was kitted out with a variety of instruments designed to study the atmosphere and space environment of Jupiter and its moons. These included imaging cameras, spectrometers (which break down light by wavelength), and a radio experiment.

Importantly, the authors of the study did not presume any characteristics of life on Earth ab initio (from the beginning), but attempted to derive their conclusions just from the data. The near infrared mapping spectrometer (NIMS) instrument detected gaseous water distributed throughout the terrestrial atmosphere, ice at the poles, and large expanses of liquid water “of oceanic dimensions.” It also recorded temperatures ranging from -30°C to +18°C.

Can you see us? Image taken by the Galileo spacecraft at a distance of 2.4 million km. Image Credit: NASA

Evidence for life? Not yet. The study concluded that the detection of liquid water and a water weather system was a necessary, but not sufficient argument.

NIMS also detected high concentrations of oxygen and methane in the Earth’s atmosphere, as compared to other known planets. Both are highly reactive gases that would react with other chemicals and dissipate in a short period of time. The only way such concentrations could be maintained is if the gases were continuously replenished by some means—again suggesting, but not proving, life. Other instruments on the spacecraft detected the presence of an ozone layer, shielding the surface from damaging UV radiation from the sun.
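The logic can be made concrete with a toy model. Suppose a gas is destroyed with characteristic lifetime τ and replenished at a constant rate S, so dC/dt = S − C/τ. Without a source the concentration collapses within a few lifetimes; with one, it settles at S·τ. (All numbers below are illustrative, not measured atmospheric values.)

```python
import math

def concentration(c0, tau, source, t):
    """Analytic solution of dC/dt = source - C/tau for a reactive gas."""
    steady = source * tau          # equilibrium level set by the source
    return steady + (c0 - steady) * math.exp(-t / tau)

tau, c0 = 10.0, 1.0  # arbitrary units

# No replenishment: the gas all but vanishes after five lifetimes.
print(concentration(c0, tau, source=0.0, t=5 * tau))
# A steady source holds the concentration at source * tau indefinitely.
print(concentration(c0, tau, source=0.1, t=5 * tau))
```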

One might imagine that a simple look through the camera might be enough to spot life. But the images showed oceans, deserts, clouds, ice, and darker regions in South America which, only with prior knowledge, we know of course to be rain forests. However, once combined with more spectrometry, a distinct absorption of red light was found to overlay the darker regions, which the study concluded was “strongly suggestive” of light being absorbed by photosynthetic plant life. No minerals were known to absorb light in exactly this fashion.

The highest resolution images taken, as dictated by the flyby geometry, were of the deserts of central Australia and the ice sheets of Antarctica. Hence none of the images taken showed cities or clear examples of agriculture. The spacecraft also flew by the planet at closest approach during the daytime, so lights from cities at night were not visible either.

Of greater interest, though, was Galileo’s plasma wave radio experiment. The cosmos is full of natural radio emission, but most of it is broadband. That is to say, the emission from a given natural source occurs across many frequencies. Artificial radio sources, by contrast, are produced in a narrow band: an everyday example is the meticulous tuning of an analog radio required to find a station amid the static.
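The distinction is easy to demonstrate numerically. In the sketch below (signal parameters chosen purely for illustration), a noise-like broadband signal spreads its power over many frequency bins, while a single artificial carrier concentrates nearly all of it in one:

```python
import cmath
import math
import random

def peak_concentration(signal):
    """Fraction of total spectral power in the strongest DFT bin (DC excluded)."""
    n = len(signal)
    power = []
    for k in range(1, n // 2):  # positive frequencies only
        s = sum(signal[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        power.append(abs(s) ** 2)
    return max(power) / sum(power)

n = 512
random.seed(0)
broadband = [random.gauss(0.0, 1.0) for _ in range(n)]               # noise-like
narrowband = [math.sin(2 * math.pi * 37 * i / n) for i in range(n)]  # one carrier

print(f"broadband:  {peak_concentration(broadband):.3f}")   # power spread thinly
print(f"narrowband: {peak_concentration(narrowband):.3f}")  # nearly all in one bin
```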

An example of natural radio emission from aurora in Saturn’s atmosphere can be heard below. The frequency changes rapidly—unlike a radio station.

Galileo detected consistent narrowband radio emission from Earth at fixed frequencies. The study concluded this could only have come from a technological civilization, and would only be detectable within the last century. If our alien spacecraft had made the same flyby of Earth at any time in the few billion years prior to the 20th century then it would have seen no definitive evidence of a civilization on Earth at all.

It is perhaps no surprise then that, as yet, no evidence for extraterrestrial life has been found. Even a spacecraft flying within a few thousand kilometers of human civilization on Earth is not guaranteed to detect it. Control experiments like this are therefore critical in informing the search for life elsewhere.

In the present era, humanity has now discovered over 5,000 planets around other stars, and we have even detected the presence of water in the atmospheres of some planets. Sagan’s experiment shows this is not enough by itself.

A strong case for life elsewhere will likely require a combination of mutually supporting evidence, such as light absorption by photosynthesis-like processes, narrowband radio emission, modest temperatures and weather, and chemical traces in the atmosphere which are hard to explain by non-biological means. As we move into the era of instruments such as the James Webb space telescope, Sagan’s experiment remains as informative now as it was 30 years ago.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Earth and moon as seen by the Galileo spacecraft / NASA

Atom Computing Says Its New Quantum Computer Has Over 1,000 Qubits

The scale of quantum computers is growing quickly. In 2022, IBM took the top spot with its 433-qubit Osprey chip. Yesterday, Atom Computing announced they’ve one-upped IBM with a 1,180-qubit neutral atom quantum computer.

The new machine runs on a tiny grid of atoms held in place and manipulated by lasers in a vacuum chamber. The company’s first 100-qubit prototype was a 10-by-10 grid of strontium atoms. The new system is a 35-by-35 grid of ytterbium atoms (shown above). (The machine has space for 1,225 atoms, but Atom has so far run tests with 1,180.)

Quantum computing researchers are working on a range of qubits—the quantum equivalent of bits represented by transistors in traditional computing—including tiny superconducting loops of wire (Google and IBM), trapped ions (IonQ), and photons, among others. But Atom Computing and other companies, like QuEra, believe neutral atoms—that is, atoms with no electric charge—have greater potential to scale.

This is because neutral atoms can maintain their quantum state longer, and they’re naturally abundant and identical. Superconducting qubits are more susceptible to noise and manufacturing flaws. Neutral atoms can also be packed more tightly into the same space as they have no charge that might interfere with neighbors and can be controlled wirelessly. And neutral atoms allow for a room-temperature set-up, as opposed to the near-absolute zero temperatures required by other quantum computers.

The company may be onto something. They’ve now increased the number of qubits in their machine by an order of magnitude in just two years, and believe they can go further. In a video explaining the technology, Atom CEO Rob Hays says they see “a path to scale to millions of qubits in less than a cubic centimeter.”

“We think that the amount of challenge we had to face to go from 100 to 1,000 is probably significantly higher than the amount of challenges we’re gonna face when going to whatever we want to go to next—10,000, 100,000,” Atom cofounder and CTO Ben Bloom told Ars Technica.

But scale isn’t everything.

Quantum computers are extremely finicky. Qubits can be knocked out of quantum states by stray magnetic fields or gas particles. The more this happens, the less reliable the calculations. Whereas scaling got a lot of attention a few years ago, the focus has shifted to error-correction in service of scale. Indeed, Atom Computing’s new computer is bigger, but not necessarily more powerful. The whole thing can’t yet be used to run a single calculation, for example, due to the accumulation of errors as the qubit count rises.

There has been recent movement on this front, however. Earlier this year, the company demonstrated the ability to check for errors mid-calculation and potentially fix those errors without disturbing the calculation itself. They also need to keep errors to a minimum overall by increasing the fidelity of their qubits. Recent papers, each showing encouraging progress in low-error approaches to neutral atom quantum computing, give fresh life to the endeavor. Reducing errors may be, in part, an engineering problem that can be solved with better equipment and design.

“The thing that has held back neutral atoms, until those papers have been published, have just been all the classical stuff we use to control the neutral atoms,” Bloom said. “And what that has essentially shown is that if you can work on the classical stuff—work with engineering firms, work with laser manufacturers (which is something we’re doing)—you can actually push down all that noise. And now all of a sudden, you’re left with this incredibly, incredibly pure quantum system.”

In addition to error correction in neutral atom quantum computers, IBM announced this year that it has developed error-correction codes for quantum computing that could reduce the number of qubits needed by an order of magnitude.

Still, even with error-correction, large-scale, fault-tolerant quantum computers will need hundreds of thousands or millions of physical qubits. And other challenges—such as how long it takes to move and entangle increasingly large numbers of atoms—exist too. Better understanding and working to solve these challenges is why Atom Computing is chasing scale at the same time as error-correction.

In the meantime, the new machine can be used on smaller problems. Bloom said if a customer is interested in running a 50-qubit algorithm—the company is aiming to offer the computer to partners next year—they’d run it multiple times using the whole computer to arrive at a reliable answer more quickly.
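The repeat-and-aggregate strategy is simple to sketch. Here noisy_run is a stand-in for a real quantum execution (the function and error rate are invented for illustration); even when a single run is often wrong, a majority vote over many runs recovers the right answer with high probability:

```python
import random
from collections import Counter

def noisy_run(true_answer, error_rate, rng):
    """Stand-in for one execution: usually returns the right bitstring,
    otherwise an arbitrary one."""
    if rng.random() < error_rate:
        return format(rng.randrange(16), "04b")  # random 4-bit string
    return true_answer

rng = random.Random(42)
true_answer = "1011"

# A single run errs 30% of the time; the most common outcome over 101
# runs is nonetheless almost certain to be correct.
runs = [noisy_run(true_answer, error_rate=0.3, rng=rng) for _ in range(101)]
best, count = Counter(runs).most_common(1)[0]
print(best)
```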

In a field of giants like Google and IBM, it’s impressive a startup has scaled their machines so quickly. But Atom Computing’s 1,000-qubit mark isn’t likely to stand alone for long. IBM is planning to complete its 1,121-qubit Condor chip later this year. The company is also pursuing a modular approach—not unlike the multi-chip processors common in laptops and phones—where scale is achieved by linking many smaller chips.

We’re still in the nascent stages of quantum computing. The machines are useful for research and experimentation but not practical problems. Multiple approaches making progress in scale and error correction—two of the field’s grand challenges—is encouraging. If that momentum continues in the coming years, one of these machines may finally solve the first useful problem that no traditional computer ever could.

Image Credit: Atom Computing

This Brain-Like IBM Chip Could Drastically Cut the Cost of AI

The brain is an exceptionally powerful computing machine. Scientists have long tried to recreate its inner workings in mechanical minds.

A team from IBM may have cracked the code with NorthPole, a fully digital chip that mimics the brain’s structure and efficiency. When pitted against state-of-the-art graphics processing units (GPUs)—the chips most commonly used to run AI programs—IBM’s brain-like chip triumphed in several standard tests, while using up to 96 percent less energy.

IBM is no stranger to brain-inspired chips. From TrueNorth to NorthPole, they’ve spent a decade tapping into the brain’s architecture to better run AI algorithms.

Project to project, the goal has been the same: to build faster, more energy-efficient chips that allow smaller devices—like our phones or the computers in self-driving cars—to run AI on the “edge.” Edge computing can monitor and respond to problems in real time without needing to send requests to remote server farms in the cloud. Like switching from dial-up modems to fiber-optic internet, these chips could also speed up large AI models with minimal energy costs.

The problem? The brain is analog. Traditional computer chips, in contrast, use digital processing—0s and 1s. If you’ve ever tried to convert an old VHS tape into a digital file, you’ll know it’s not a straightforward process. So far, most chips that mimic the brain use analog computing. Unfortunately, these systems are noisy and errors can easily slip through.

With NorthPole, IBM went completely digital. Tightly packing 22 billion transistors onto 256 cores, the chip takes its cues from the brain by placing computing and memory modules next to each other. Faced with a task, each core takes on a part of a problem. However, like nerve fibers in the brain, long-range connections link modules, so they can exchange information too.

This sharing is an “innovation,” said Drs. Subramanian Iyer and Vwani Roychowdhury at the University of California, Los Angeles (UCLA), who were not involved in the study.

The chip is especially relevant in light of increasingly costly, power-hungry AI models. Because NorthPole is fully digital, it also dovetails with existing manufacturing processes—the packaging of transistors and wired connections—potentially making it easier to produce at scale.

The chip represents “neural inference at the frontier of energy, space and time,” the authors wrote in their paper, published in Science.

Mind Versus Machine

From DALL-E to ChatGPT, generative AI has taken the world by storm with its shockingly human-like text-based responses and images.

But to study author Dr. Dharmendra S. Modha, generative AI is on an unsustainable path. The software is trained on billions of examples—often scraped from the web—to generate responses. Both creating the algorithms and running them requires massive amounts of computing power, resulting in high costs, processing delays, and a large carbon footprint.

These popular AI models are loosely inspired by the brain’s inner workings. But they don’t mesh well with our current computers. The brain processes and stores memories in the same location. Computers, in contrast, divide memory and processing into separate blocks. This setup shuttles data back and forth for each computation, and traffic can stack up, causing bottlenecks, delays, and wasted energy.

It’s a “data movement crisis,” wrote the team. We need “dramatically more computationally-efficient methods.”

One idea is to build analog computing chips similar to how the brain functions. Rather than processing data using a system of discrete 0s and 1s—like on-or-off light switches—these chips function more like light dimmers. Because each computing “node” can capture multiple states, this type of computing is faster and more energy efficient.

Unfortunately, analog chips also suffer from errors and noise. Like a light dimmer nudged slightly off its mark, even a small error can alter the output. Although flexible and energy efficient, the chips are difficult to work with when processing large AI models.
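
The difference is easy to see in a toy simulation (our own illustration, not modeled on any real chip): a digital stage snaps each noisy value back to 0 or 1 at every step, while an analog stage carries every perturbation forward.

```python
import random

random.seed(0)  # deterministic noise for the demonstration

def transmit_digital(bit, noise):
    # A digital stage thresholds the noisy value back to 0 or 1,
    # erasing small perturbations at every step.
    return round(min(1.0, max(0.0, bit + noise)))

def transmit_analog(value, noise):
    # An analog stage passes the perturbed value along unchanged,
    # so errors accumulate across stages.
    return value + noise

digital, analog = 1, 1.0
for _ in range(100):
    noise = random.uniform(-0.1, 0.1)
    digital = transmit_digital(digital, noise)
    analog = transmit_analog(analog, noise)

print(digital)  # still exactly 1 after 100 noisy stages
print(analog)   # has drifted away from 1.0
```

After a hundred noisy hops the digital bit is untouched, while the analog value has wandered—the core reason large AI models are hard to run reliably on analog hardware.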

A Match Made in Heaven

What if we combined the flexibility of neurons with the reliability of digital processors?

That’s the driving concept for NorthPole. The result is a stamp-sized chip that can beat the best GPUs in several standard tests.

The team’s first step was to distribute data processing across multiple cores, while keeping memory and computing modules inside each core physically close.

Previous analog chips, like IBM’s TrueNorth, used a special material to combine computation and memory in one location. Instead of going analog with non-standard materials, the NorthPole chip places standard memory and processing components next to each other.

The rest of NorthPole’s design borrows from the brain’s larger organization.

The chip has a distributed array of cores like the cortex, the outermost layer of the brain responsible for sensing, reasoning, and decision-making. Each part of the cortex processes different types of information, but it also shares computations and broadcasts results throughout the region.

Inspired by these communication channels, the team built two networks on the chip to democratize memory. Like neurons in the cortex, each core can access computations within itself, but also has access to a global memory. This setup removes hierarchy in data processing, allowing all cores to tackle a problem simultaneously while also sharing their results—thereby eliminating a common bottleneck in computation.

The team also developed software that cleverly delegates a problem in both space and time to each core—making sure no computing resources go to waste or collide with each other.

The software “exploits the full capabilities of the [chip’s] architecture,” they explained in the paper, while helping integrate “existing applications and workflows” into the chip.

Compared to TrueNorth, IBM’s previous brain-inspired analog chip, NorthPole can support AI models that are 640 times larger, involving 3,000 times more computations. All that with just four times the number of transistors.

A Digital Brain Processor

The team next pitted NorthPole against several GPU chips in a series of performance tests.

NorthPole was 25 times more efficient when challenged with the same problem. The chip also processed data at lightning-fast speeds compared to GPUs on two difficult AI benchmark tests.

Based on initial tests, NorthPole is already usable for real-time facial recognition or deciphering language. In theory, its fast response time could also guide self-driving cars in split-second decisions.

Computer chips are at a crossroads. Some experts believe that Moore’s law—which posits that the number of transistors on a chip doubles every two years—is at death’s door. Although still in their infancy, alternative computing structures, such as brain-like hardware and quantum computing, are gaining steam.

But NorthPole shows semiconductor technology still has much to give. Currently, there are 37 million transistors per square millimeter on the chip. But based on projections, the setup could easily expand to two billion, allowing larger algorithms to run on a single chip.
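
As a rough sanity check on that headroom (simple arithmetic, not a figure from the paper), going from 37 million to 2 billion transistors per square millimeter is about a 54-fold increase, or nearly six density doublings:

```python
import math

current_density = 37e6   # transistors per mm^2 on NorthPole today
projected_density = 2e9  # the projected ceiling cited by the team

scale_factor = projected_density / current_density
doublings = math.log2(scale_factor)

print(round(scale_factor))  # ~54x more transistors per mm^2
print(round(doublings, 1))  # ~5.8 density doublings to get there
```

At Moore’s-law pace of one doubling every two years, that headroom would correspond to more than a decade of conventional scaling.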

“Architecture trumps Moore’s law,” wrote the team.

They believe innovation in chip design, like NorthPole, could provide near-term solutions in the development of increasingly powerful but resource-hungry AI.

Image Credit: IBM

Could Having Robot Coworkers Make Us Lazier? Yep, Pretty Much, Study Says

It’s not uncommon for people to take their foot off the pedal at work if they know others will cover for them. And it turns out, the same might be true when people think robots have got their backs.

While robots have been a fixture in the workplace for decades, they’ve typically taken the form of heavy machinery that workers should steer well clear of. But in recent years, with advances in AI, there have been efforts to build collaborative robots that work alongside humans as teammates and partners.

Being able to share a workspace and cooperate with humans could allow robots to assist in a far wider range of tasks and augment human workers to boost their productivity. But it’s still far from clear how the dynamics of human-robot teams would play out in reality.

New research in Frontiers in Robotics and AI suggests there could be potential downsides if the technology isn’t deployed thoughtfully. The researchers found that when humans were asked to spot defects in electronic components, they did a worse job when they thought a robot had already checked a piece.

“Teamwork is a mixed blessing,” first author Dietlind Helene Cymek, from the Technical University of Berlin in Germany, said in a press release. “Working together can motivate people to perform well, but it can also lead to a loss of motivation because the individual contribution is not as visible. We were interested in whether we could also find such motivational effects when the team partner is a robot.”

The phenomenon the researchers uncovered is already well-known among humans. Social loafing, as it is known, has been extensively studied by psychologists and refers to an individual putting less effort into a task performed as a team compared to one performed alone.

This often manifests when it’s hard to identify individual contributions to a shared task, say the researchers, which can lead to a lack of motivation. Having a high-performing co-worker can also make it more likely.

To see if the phenomenon could also impact teams of robots and humans, the researchers set up a simulated quality assurance task in which volunteers were asked to check images of circuit boards for defects. To measure how the humans were inspecting the boards, the images were blurred out and only became clear in areas where the participants hovered their mouse cursor.

Of the 42 people who took part in the trial, half worked alone, and the other half were told that a robot had already checked the images they were seeing. For the second group, each image featured red check marks where the robot had spotted problems, but crucially, it had missed five defects. Afterwards the participants were asked to rate themselves on how they performed, their effort, and how responsible for the task they felt.

The researchers found that both groups spent roughly the same amount of time inspecting the boards, covered the same areas, and rated their own performance similarly. However, the group working in tandem with the robot spotted an average of only 3.3 of the five defects missed by the machine, while the group working alone caught 4.23 on average.
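
Converted to detection rates (simple arithmetic from the reported averages; the variable names are ours):

```python
defects_missed_by_robot = 5  # defects left for the human to catch

solo_avg = 4.23        # average caught by participants working alone
with_robot_avg = 3.3   # average caught when a robot had "already checked"

print(f"Alone: {solo_avg / defects_missed_by_robot:.1%}")             # Alone: 84.6%
print(f"With robot: {with_robot_avg / defects_missed_by_robot:.1%}")  # With robot: 66.0%
```

In other words, nearly a fifth more of the robot’s misses slipped through when people assumed the machine had their backs.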

The researchers say this suggests that those working with the robot were less attentive when they were checking the circuit boards. They speculate that this could be because they subconsciously assumed the robot wouldn’t have missed any defects.

While the effect was not hugely pronounced, the researchers point out that in their study participants knew they were being watched and evaluated and the tests were relatively short and simple.

“In longer shifts, when tasks are routine and the working environment offers little performance monitoring and feedback, the loss of motivation tends to be much greater,” said Dr. Linda Onnasch, senior author of the study.

While the research was focused on human-robot collaboration, it’s not a stretch to imagine that similar dynamics could play out with other kinds of AI assistants. While chatbots are beginning to provide links to sources, many humans may not bother to check them, and there are growing concerns that people are becoming over-reliant on AI writing tools.

With machines able to assist us on a growing number of daily tasks, it will be important to make sure they’re helping augment our capabilities—and not simply letting us slack off.

Image Credit: Jem Sahagun / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through October 21)

ARTIFICIAL INTELLIGENCE

Minds of Machines: The Great AI Consciousness Conundrum
Grace Huckins | MIT Technology Review
“AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make.”

TECH

OpenAI in Talks for Deal That Would Value Company at $80 Billion
Cade Metz | The New York Times
“OpenAI is in talks to complete a deal that would value the company at $80 billion or more, nearly triple its valuation less than six months ago, according to a person with knowledge of the discussions. …OpenAI declined to comment. Nearly a year after OpenAI sparked an AI boom with the release of the online chatbot ChatGPT, the Silicon Valley deal-making machine continues to pump money into the field’s leading companies.”

ROBOTICS

Figure Unveils Its Humanoid Robot Prototype
Evan Ackerman | IEEE Spectrum
“When Figure announced earlier this year that it was working on a general-purpose humanoid robot, our excitement was tempered somewhat by the fact that the company didn’t have much to show besides renderings of the robot that it hoped to eventually build. …As it turns out, the company progressed pretty darn fast, and today Figure is unveiling its Figure 01 robot, which has gone from nothing at all to dynamic walking in under a year.”

ETHICS

We Don’t Actually Know If AI Is Taking Over Everything
Karen Hao | The Atlantic
“More and more of this technology, once developed through open research, has become almost completely hidden within corporations that are opaque about what their AI models are capable of and how they are made. …Now we have a way to measure just how bad AI’s secrecy problem actually is. Yesterday, Stanford University’s Center for Research on Foundation Models launched a new index that tracks the transparency of 10 major AI companies, including OpenAI, Google, and Anthropic.”

COMPUTING

Thirty Years Later, a Speed Boost for Quantum Factoring
Ben Brubaker | Quanta
“Shor’s algorithm will enable future quantum computers to factor large numbers quickly, undermining many online security protocols. Now a researcher has shown how to do it even faster. …The broader lesson of [Oded] Regev’s new algorithm, beyond the implications for factoring, is that quantum computing researchers should always be open to surprises, even in problems that have been studied for decades. ‘This variant of my algorithm was undiscovered for 30 years and came out of the blue,’ Shor said. ‘There’s still probably lots of other quantum algorithms to be found.’”

ARTIFICIAL INTELLIGENCE

AI Chatbots Can Guess Your Personal Information From What You Type
Will Knight | Wired
“The way you talk can reveal a lot about you—especially if you’re talking to a chatbot. …[Computer science professor Martin Vechev] and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.”

FUTURE

What if We Could All Control AI?
Kevin Roose | The New York Times
“Should AI be governed by a handful of companies that try their best to make their systems as safe and harmless as possible? Should regulators and politicians step in and build their own guardrails? Or should AI models be made open-source and given away freely, so users and developers can choose their own rules? A new experiment by Anthropic, the maker of the chatbot Claude, offers a quirky middle path: What if an AI company let a group of ordinary citizens write some rules, and trained a chatbot to follow them?”

ROBOTICS

Amazon Plans to Deploy Delivery Drones in the UK and Italy Next Year
Amrita Khalid | The Verge
“Despite running into obstacles in the US, Amazon is planning to expand its Prime Air drone delivery program to two additional countries. Amazon announced today that it will soon make drone delivery available for Prime members in Italy and the United Kingdom—in addition to expanding to one more yet-to-be-named US city.”

Image Credit: Douglas Sanchez / Unsplash

Scientists Pump Up Lab-Grown Muscles for Robots With a New Magnetic Workout

As I’m typing these words, I don’t think about the synchronized muscle contractions that allow my fingers to dance across the keyboard. Or the back muscles that unconsciously tighten to hold myself upright while sitting on a spongy cushion.

It’s easy to take our muscles for granted. But under the hood, muscle cells perfectly align to build fibers—interwoven with blood vessels and nerves—into a biological machine that lets us move about in our daily lives without a second thought.

Unfortunately, these precise cell arrangements are also why artificial muscles are difficult to recreate in the lab. Despite being soft, squishy, and easily damaged, our muscles can perform incredible feats—adapt to heavy loads, sense the outside world, and rebuild after injury. A main reason for these superpowers is alignment—that is, how muscle cells orient to form stretchy fibers.

Now, a new study suggests that the solution to growing better lab-grown muscles may be magnets. Led by Dr. Ritu Raman at the Massachusetts Institute of Technology (MIT), scientists developed a magnetic hydrogel “sandwich” that controls muscle cell orientation in a lab dish. By changing the position of the magnets, the muscle cells aligned into fibers that contracted in synchrony as if they were inside a body.

The whole endeavor sounds rather Frankenstein. But lab-grown tissues could one day be grafted into people with heavily damaged muscles—either from inherited diseases or traumatic injuries—and restore their ability to navigate the world freely. Synthetic muscles could also coat robots, providing them with human-like senses, flexible motor control, and the ability to heal after inevitable scratches and scrapes.

Raman’s work takes a step in that direction. Her team built a biomanufacturing platform focused on replicating the mechanical forces between muscle cells and their environment, a relationship essential for the cells to organize into tissues. But it’s not just about mimicking physical forces—stretching, pulling, or twisting. Rather, the platform also takes into account how mechanical movements alter communication between cells, directing them to align.

Along with a custom algorithm, the platform essentially turned cells into a living, functional biomaterial that self-organizes and responds to pushes and pulls. In turn, the system could also shed light on muscle cells’ remarkable ability to adapt, align, and regenerate.

“The ability to make aligned muscle in a lab setting means that we can develop model tissues for understanding muscle in healthy and diseased states and for developing and testing new therapies that combat muscle injury or disease,” Raman said in a press briefing.

Muscling Through

Raman has long sought to use living cells as an adaptive biomanufacturing material.

Over a half decade ago, she engineered tiny 3D-printed cyborg bots with genetically-altered muscle cells that responded to light. Like moths to a flame, the bio-bots followed beams of light. Surprisingly, like well-trained athletes, the bots’ engineered muscles became more flexible as they exercised, allowing them to steer and rotate through different challenges.

The results made the team wonder: For lab-grown muscles to fully function, do we need to “exercise” the tissues?

The answer is seemingly yes. In a study last week, the team expanded on their bio-bot results, using light-activated muscle grafts to repair a large muscle injury in the hind legs of mice.

Muscle grafts are nothing new, but they need to integrate into their hosts, which is often difficult. Here, mice with Raman’s grafts completely recovered mobility in just two weeks.

The team shined beams of blue light on the muscle grafts daily as an exercise regime to strengthen them after implantation in the host. The workout didn’t just keep the grafts alive; it also spurred them to grow blood vessels and nerve cells that connected with the host body. With just half an hour a day for 10 days, the treatment boosted muscle force by three times.

“Exercising muscle grafts after they’ve implanted does more than just make muscle stronger, it also appears to affect how muscle communicates with other tissue, like blood vessels and nerves,” said Raman.

Yet a question remained: Can you make muscle fibers stronger with exercise in a petri dish outside the body?

Magnetic Workout

It’s not a crazy idea. Raman’s previous work found that zapping muscle fibers with electrical bursts for 30 minutes a day made the fibers stronger after just 10 days.

Muscles are wired to respond to electrical signals from the brain. However, they’re also sensitive to mechanical forces, which are hard to replicate in the lab.

“Generally, when people want to mechanically stimulate tissues in a lab environment, they grasp the tissue at both ends and move it back and forth, stretching and compressing the whole tissue,” Raman said in the briefing. “But this doesn’t really mimic how cells talk to each other in our bodies.”

Enter MagMA. Short for magnetic matrix actuation (yeah, it’s a mouthful), the system mimics the gel-like structure surrounding muscle cells. It’s basically a sandwich: The inner layer is a magnetic silicone “filling” embedded with iron microparticles. The two “bread slices” consist of a hydrogel made with ingredients from a natural protein.

Added to the bottom of a petri dish, MagMA functions like the gel-like matrix that muscle cells interact with—but with a magnetic boost.

The team wanted to know if, like electricity and light, magnets can also be used to exercise muscles and encourage them to form fibers.

They grew a layer of muscle cells in a MagMA-coated petri dish. By sliding a magnet across the petri dish, they moved the magnetic particles in the hydrogel, which in turn mechanically flexed the cells. The system is highly controllable: changing the magnet’s path alters the magnitude and direction of the mechanical forces the cells experience, in turn changing their orientation.

Overall, the team found the muscle cells rapidly aligned into neatly bundled muscle fibers. Although the fibers weren’t stronger, they were more organized. In contrast, cells grown in standard dishes grew chaotically, forming muscle fibers at odds with their neighbors’ orientation.

The difference in structure altered the muscle’s function: muscle cells programmed with MagMA hydrogels twitched together in synchrony—a critical need when engineering functional muscles. Those grown without MagMA also moved, but with each fiber twitching to its own tune, resulting in a sort of spasm.

“We were very surprised by the findings of our study,” said Raman. “This confirmed our understanding that the form and function of muscle are intrinsically connected, and that controlling form can help us control function.”

The magnetic platform is still in its early days. The team is now looking to optimize magnet strength and other parameters to best stimulate muscle growth and function. They’re also expanding the platform to other cell types in tissue engineering and tailoring the hydrogel component with other materials for specific biomanufacturing needs.

The team is certain that building muscles is just the first step. We think “this platform will drive a broad range of fundamental and translational studies of mechanobiology, regenerative medicine, and biohybrid robotics,” they wrote in the study.

Image Credit: Ella Marushchenko

Quantum Computers in 2023: Where They Are Now and What’s Next

In June, an IBM computing executive claimed quantum computers were entering the “utility” phase, in which high-tech experimental devices become useful. In September, Australia’s chief scientist Cathy Foley went so far as to declare “the dawn of the quantum era.”

This week, Australian physicist Michelle Simmons won the nation’s top science award for her work on developing silicon-based quantum computers.

Obviously, quantum computers are having a moment. But—to step back a little—what exactly are they?

What Is a Quantum Computer?

One way to think about computers is in terms of the kinds of numbers they work with.

The digital computers we use every day rely on whole numbers (or integers), representing information as strings of zeroes and ones which they rearrange according to complicated rules. There are also analog computers, which represent information as continuously varying numbers (or real numbers), manipulated via electrical circuits or spinning rotors or moving fluids.

In the 16th century, the Italian mathematician Girolamo Cardano invented another kind of number called complex numbers to solve seemingly impossible tasks such as finding the square root of a negative number. In the 20th century, with the advent of quantum physics, it turned out complex numbers also naturally describe the fine details of light and matter.
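
Cardano’s “impossible” numbers are now built into most programming languages; a minimal Python sketch of the idea (standard library only):

```python
import cmath

# Cardano's "impossible" task: the square root of a negative number
# has no real-valued answer, but it does have a complex one.
root = cmath.sqrt(-1)
print(root)  # 1j, the imaginary unit

# Multiplying it by itself recovers -1.
print(root * root)  # (-1+0j)

# In quantum physics, states carry complex amplitudes; the squared
# magnitudes of the amplitudes are probabilities that sum to 1.
# An equal two-way superposition:
amp = 1 / cmath.sqrt(2)
total_probability = abs(amp) ** 2 + abs(amp) ** 2
print(round(total_probability, 10))  # 1.0
```

The amplitudes here are exactly the kind of complex-number bookkeeping that quantum algorithms manipulate directly.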

In the 1990s, physics and computer science collided when it was discovered that some problems could be solved much faster with algorithms that work directly with complex numbers as encoded in quantum physics.

The next logical step was to build devices that work with light and matter to do those calculations for us automatically. This was the birth of quantum computing.

Why Does Quantum Computing Matter?

We usually think of the things our computers do in terms that mean something to us—balance my spreadsheet, transmit my live video, find my ride to the airport. However, all of these are ultimately computational problems, phrased in mathematical language.

As quantum computing is still a nascent field, most of the problems we know quantum computers will solve are phrased in abstract mathematics. Some of these will have “real world” applications we can’t yet foresee, but others will find a more immediate impact.

One early application will be cryptography. Quantum computers will be able to crack today’s internet encryption algorithms, so we will need quantum-resistant cryptographic technology. Provably secure cryptography and a fully quantum internet would use quantum computing technology.

A microscopic view of a square, iridescent computer chip against an orange background.
Google has claimed its Sycamore quantum processor can outperform classical computers at certain tasks. Image Credit: Google

In materials science, quantum computers will be able to simulate molecular structures at the atomic scale, making it faster and easier to discover new and interesting materials. This may have significant applications in batteries, pharmaceuticals, fertilizers, and other chemistry-based domains.

Quantum computers will also speed up many difficult optimization problems, where we want to find the “best” way to do something. This will allow us to tackle larger-scale problems in areas such as logistics, finance, and weather forecasting.

Machine learning is another area where quantum computers may accelerate progress. This could happen indirectly, by speeding up subroutines in digital computers, or directly if quantum computers can be reimagined as learning machines.

What Is the Current Landscape?

In 2023, quantum computing is moving out of the basement laboratories of university physics departments and into industrial research and development facilities. The move is backed by the checkbooks of multinational corporations and venture capitalists.

Contemporary quantum computing prototypes—built by IBM, Google, IonQ, Rigetti, and others—are still some way from perfection.

Today’s machines are of modest size and susceptible to errors, in what has been called the “noisy intermediate-scale quantum” phase of development. The delicate nature of tiny quantum systems means they are prone to many sources of error, and correcting these errors is a major technical hurdle.

The holy grail is a large-scale quantum computer which can correct its own errors. A whole ecosystem of research factions and commercial enterprises are pursuing this goal via diverse technological approaches.

Superconductors, Ions, Silicon, Photons

The current leading approach uses loops of electric current inside superconducting circuits to store and manipulate information. This is the technology adopted by Google, IBM, Rigetti, and others.

Another method, the “trapped ion” technology, works with groups of electrically charged atomic particles, using the inherent stability of the particles to reduce errors. This approach has been spearheaded by IonQ and Honeywell.

Illustration showing glowing dots and patterns of light.
An artist’s impression of a semiconductor-based quantum computer. Image Credit: Silicon Quantum Computing

A third route of exploration is to confine electrons within tiny particles of semiconductor material, which could then be melded into the well-established silicon technology of classical computing. Silicon Quantum Computing is pursuing this angle.

Yet another direction is to use individual particles of light (photons), which can be manipulated with high fidelity. A company called PsiQuantum is designing intricate “guided light” circuits to perform quantum computations.

There is no clear winner yet from among these technologies, and it may well be a hybrid approach that ultimately prevails.

Where Will the Quantum Future Take Us?

Attempting to forecast the future of quantum computing today is akin to predicting flying cars and ending up with cameras in our phones instead. Nevertheless, there are a few milestones that many researchers would agree are likely to be reached in the next decade.

Better error correction is a big one. We expect to see a transition from the era of noisy devices to small devices that can sustain computation through active error correction.

Another is the advent of post-quantum cryptography. This means the establishment and adoption of cryptographic standards that can’t easily be broken by quantum computers.

Commercial spin-offs of technology such as quantum sensing are also on the horizon.

The demonstration of a genuine “quantum advantage” will also be a likely development. This means a compelling application where a quantum device is unarguably superior to the digital alternative.

And a stretch goal for the coming decade is the creation of a large-scale quantum computer free of errors (with active error correction).

When this has been achieved, we can be confident the 21st century will be the “quantum era.”

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: A complex cooling rig is needed to maintain the ultracold working temperatures required by a superconducting quantum computer / IBM

Amazon Robots Take Over Warehouses to Get That Thing You Ordered Even Faster

It’s no secret Amazon has been neck-deep in robotics for years. It began with the acquisition of robotics startup Kiva in 2012. Since then, Amazon’s automation efforts have expanded to include a menagerie of over 750,000 warehouse robots for fetching, picking, and sorting.

Now the company is stringing these robots together into a kind of assembly line for the billions of packages it ships every year. According to Amazon, it launched a new system, called Sequoia, in a Houston warehouse this week.

In the new process, robots pull totes from shelves and bring them over to a robotic arm equipped with computer vision and machine learning. The arm sorts the totes and sends them off to a worker for picking and packing. Another robotic arm then consolidates the remaining items for storage.

According to Amazon, Sequoia shaves 25 percent off the time it takes to fulfill orders and accelerates item identification and storage by 75 percent. The company aims to expand the system to many more of its warehouses in the coming years.

To be clear, the robots themselves aren’t necessarily new. Kiva’s wheeled robots started fetching towers of totes years ago, and in 2022, Amazon unveiled several new robots, including updated Kiva-style movers (Proteus) and clever robotic arms (Sparrow). What’s new is the way in which these robots are being strung together into a comprehensive system that’s ready to take center stage in daily operations.

“The key thing Amazon is trying to do is integrate,” Rueben Scriven, research manager at market research firm Interact Analysis, told the Wall Street Journal. “They have the different pieces, and now it’s about, ‘How do we bring them together in a harmonious system?'”

Whereas blue-sky projects into sci-fi-like general-purpose robots—including the likes of Tesla, Sanctuary, and Figure (which this week unveiled a much-improved prototype)—are being pursued to fulfill nebulous future applications, Amazon’s robotics efforts are rooted in hard-nosed economics. They make financial sense for the company, and the company has ample cash to invest in their development. If the earliest wave of robotics automation hit manufacturing, Amazon’s warehouse work shows a new wave is well underway. Amazon competitor Walmart is likewise rapidly automating its warehouses.

In contrast to manufacturing, in which robots perform highly repetitive, precisely dictated actions, warehouse automation is a harder problem. Working on a warehouse floor means dodging humans and other machines and navigating a more open environment. Identifying products of all shapes and sizes and picking the exact thing a customer ordered requires the ability to see and “understand” what is being seen.

Most of this work was out of range for robots until recently. But Amazon has been chipping away at the problem, and isn’t likely to end those efforts.

The company will also begin testing the humanoid Digit robot in warehouses soon. Digit’s maker, Agility Robotics, in which Amazon has invested, announced this month it plans to open a new plant later in the year to manufacture hundreds and eventually thousands of its robots. Beyond the warehouse, Amazon is interested in self-driving vehicles, drone delivery, and would likely not be against Digit-like robots dropping packages on your doorstep.

Despite continued worries about robots taking human jobs and increasing injuries as the pace of the work accelerates, the company claims automation benefits customers and employees alike. It says its human workforce has grown alongside automation, and the vision remains one of robots and humans working together, not the former replacing the latter. Sequoia should improve safety overall, according to the company. Workers won’t have to reach as high for heavy totes as they have in the past, for instance, reducing injuries. It’s still an unsettled debate, of course, and it will take time to see how things shake out.

Still, with double-digit gains in speed and efficiency, Amazon isn’t likely to ease back on automation anytime soon. The next time you order a last-minute gift and find it can be delivered same-day, you can thank Amazon’s tireless human and robot workers for the favor.

Image Credit: Amazon

Cancer-Killing Duo Hunts Down and Destroys Tumors With Surprising Alacrity

Bacteria may seem like a strange ally in the battle against cancer.

But in a new study, genetically engineered bacteria were part of a tag-team therapy to shrink tumors. In mice with blood, breast, or colon cancer, the bacteria acted as homing beacons for their partners—modified T cells—as the two sought out and destroyed tumor cells.

CAR T—the name for therapies using these cancer-destroying T cells—is a transformative approach. First approved by the US Food and Drug Administration (FDA) for a deadly type of leukemia in 2017, the approach has since grown to six treatments for multiple types of blood cancers.

Dubbed a “living drug” by pioneer researcher Dr. Carl June at the University of Pennsylvania, CAR T is beginning to take on autoimmune diseases, heart injuries, and liver problems. It is also poised to wipe out senescent “zombie cells” linked to age-related diseases and fight off HIV and other viral infections.

Despite its promise, however, CAR T falters when pitted against solid tumors—which make up roughly 90 percent of all cancers.

“Each type of tumor has its own little ways of evading the immune system,” June previously told Penn Medicine News. “So there won’t be one silver-bullet CAR T therapy that targets all types of tumors.”

Surprisingly, bacteria may cause June to reconsider—the new approach has potential as a universal treatment for all sorts of solid tumors. When given to mice, the engineered bugs dug deep into the cores of tumors and readily secreted a synthetic “tag” to draw in nearby CAR T soldiers. The molecular tag only sticks to the regions immediately surrounding a tumor and spares healthy cells from CAR T attacks.

The engineered bacteria could also, in theory, infiltrate other types of solid tumors, including “sneaky” ones difficult to target with conventional therapies. The new method, called ProCAR (probiotic-guided CAR T cells), combines bacteria and T cells into a cancer-fighting powerhouse.

It showcases “the utility of engineered bacteria as a new enhancement to CAR T cell therapy,” said Eric Bressler and Dr. Wilson Wong at Boston University, who were not involved in the study.

Double Tap

Hang on, what’s CAR T again?

In a nutshell, CAR T therapies use T cells that have been genetically engineered to boost their existing abilities. T cells are already natural-born killers that hunt down and destroy viruses, bacteria, and cancers inside our bodies. They use cellular “claws” to grab onto special proteins—called antigens—on the surfaces of target cells without damaging nearby healthy cells.

But cancer cells are tricky foes. Their antigens rapidly mutate to avoid T cell surveillance and attacks. CAR T therapy overrides this defense by engineering T cells to better seek and destroy their targets.

The process usually goes like this. T cells are extracted in a blood draw. Scientists then insert genes into the cells to make a new protein “claw” to grab onto a specific antigen. These engineered cells are infused back into the patient’s body where they hunt down that antigen and destroy the target cell. Recent work is also exploring directly editing T cells inside the body.

CAR T has done wonders for previously untreatable blood cancers. But solid tumors are a different story.

A big problem is targeting. Many blood cancers have a universal antigen that signals “I’m cancerous,” making it relatively easy to engineer CAR T cells to find them.

Solid tumors, in contrast, have a wide variety of antigens—many of which are also present in normal tissues—lowering CAR T cell efficiency and increasing the chances of deadly side effects. Even worse, cancer cells pump out glue-like proteins that build a protective shield around cancers. Called the tumor microenvironment, the barrier is highly toxic to CAR T cells. Its low oxygen levels readily destroy the membranes of CAR T cells. Like popped balloons, the cells spill their contents into surrounding areas, in turn driving inflammation.

What can survive this tumor wasteland? Bacteria.

A Universal Antigen

The new study transformed bacteria into Trojan horses that can, in theory, infiltrate any solid tumor. The chosen bacteria, a strain of E. coli, are already used to ease gastrointestinal and metabolic issues. They’re easy to genetically reprogram and can release biological payloads into the cores of tumors, making them perfect candidates for “tagging” cancers in CAR T.

To design the tags, the team engineered a protein antigen that anchors itself to components in the tumor and glows fluorescent green. Coating tumors in this designer antigen makes them easy to spot and vulnerable to CAR T cells designed to destroy them.

The team then genetically reprogrammed bacteria to release their antigen payload once they reached the tumor microenvironment.

In a proof of concept, the tag-team system reduced cancer growth and increased survival in mice with an aggressive blood cancer. Treatments using probiotics with a nonfunctional tag didn’t help. Treated mice happily went about their day and maintained a healthy body weight as their tumors shrank. The engineered bacteria lingered near the tumors for at least two weeks.

Further tests in mice with colon cancer showed similarly positive outcomes. A dose of bacteria followed by two doses of CAR T cells reduced tumor size four-fold 22 days after treatment.

Another Leg Up

The system worked, but the team wasn’t satisfied. The amount of antigen produced depends on bacterial growth, causing the tag’s efficiency to ebb and flow with the bacterial population.

To give the system a boost, the team added another genetic circuit into the bacteria, allowing them to release a chemical that attracts CAR T cells. The improved method reduced tumors in mice with breast cancer after two shots into the bloodstream.

“Combining the advantages of tumor-homing bacteria and CAR-T cells provides a new strategy for tumor recognition, and this builds the foundation for engineered communities of living therapies,” said study author Rosa Vincent of Columbia University.

The strategy could be especially powerful in tumors without obvious antigens. However, scaling it up will take some effort. Cancers in humans are roughly 0.8 inches in diameter—about three-fourths of a quarter.

While that’s a low estimate for many types of cancer, it’s still “20- to 40-fold larger than the mouse tumors in this study,” said Bressler and Wong. Further studies will have to explore how well the synthetic antigen diffuses in increasingly large cancers.

Safety is another concern. Compared to mice, humans are more sensitive to potential toxins produced by bacteria. Based on previous clinical trials with engineered bacteria, the solution may be more genetic engineering to dampen toxin-related genes.

“While we’re still in the research phase,” the results “could open up new avenues for cancer therapy,” said study author Dr. Tal Danino.

Image Credit: Colorized scanning electron microscope image of a T cell / NIAID

Could Powering AI Gobble Up as Much Energy as a Small Country?

As companies race to build AI into their products, there are concerns about the technology’s potential energy use. A new analysis suggests AI could match the energy budgets of entire countries, but the estimates come with some notable caveats.

Both training and serving AI models require huge data centers running many thousands of cutting-edge chips. This consumes considerable amounts of energy, both to power the calculations themselves and to run the massive cooling infrastructure required to keep the chips from melting.

With excitement around generative AI at fever pitch and companies aiming to build the technology into all kinds of products, some are sounding the alarm about what this could mean for future energy consumption. Now, energy researcher Alex de Vries, who made headlines for his estimates of Bitcoin’s energy use, has turned his attention to AI.

In a paper published in Joule, he estimates that in the worst-case scenario Google’s AI use alone could match the total energy consumption of Ireland. And by 2027, he says global AI usage could account for 85 to 134 terawatt-hours annually, which is comparable to countries like the Netherlands, Argentina, and Sweden.

“Looking at the growing demand for AI service, it’s very likely that energy consumption related to AI will significantly increase in the coming years,” de Vries, who is now a PhD candidate at Vrije Universiteit Amsterdam, said in a press release.

“The potential growth highlights that we need to be very mindful about what we use AI for. It’s energy intensive, so we don’t want to put it in all kinds of things where we don’t actually need it.”

There are some significant caveats to de Vries’ headline numbers. The Google prediction is based on suggestions by the company’s executives that they could build AI into their search engine, combined with some fairly rough power consumption estimates from research firm SemiAnalysis.

The analysts at SemiAnalysis suggest that applying AI similar to ChatGPT to each of Google’s nine billion daily searches would take roughly 500,000 of Nvidia’s specialized A100 HGX servers. Each of these servers requires 6.5 kilowatts to run, which combined would mean a daily electricity consumption of 80 gigawatt-hours, or 29.2 terawatt-hours a year, according to the paper.

Google is unlikely to reach these levels though, de Vries admits, because such rapid adoption is unlikely, the enormous costs would eat into profits, and Nvidia doesn’t have the ability to ship that many AI servers. So, he did another calculation based on Nvidia’s total projected server production by 2027 when a new chip plant will be up and running, allowing it to produce 1.5 million of its servers annually. Given a similar energy consumption profile, these could be consuming 85 to 134 terawatt-hours a year, he estimates.
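Both headline figures follow from straightforward arithmetic on the stated assumptions. A minimal back-of-envelope sketch, assuming (as the article does) 6.5 kilowatts per server and round-the-clock, 100 percent utilization:

```python
# Back-of-envelope check of the paper's headline numbers, using only the
# figures quoted in the article. Real-world utilization would lower these.

SERVER_POWER_KW = 6.5  # Nvidia A100 HGX server, per SemiAnalysis

# Scenario 1: ChatGPT-like AI in every Google search (~500,000 servers)
servers = 500_000
daily_gwh = servers * SERVER_POWER_KW * 24 / 1e6    # kWh -> GWh per day
annual_twh = daily_gwh * 365 / 1e3                  # GWh -> TWh per year
print(f"Google scenario: {daily_gwh:.0f} GWh/day, {annual_twh:.1f} TWh/yr")
# ~78 GWh/day; the paper rounds this to 80 GWh/day, i.e. 29.2 TWh/yr

# Scenario 2: Nvidia's projected 2027 output of 1.5 million servers a year
servers_2027 = 1_500_000
annual_twh_2027 = servers_2027 * SERVER_POWER_KW * 8760 / 1e9  # kWh -> TWh
print(f"2027 scenario: {annual_twh_2027:.1f} TWh/yr")
# ~85.4 TWh/yr, the low end of the 85-134 TWh range; the high end rests
# on heavier server assumptions not detailed in the article
```

At realistic utilization, and with any hardware or model efficiency gains, the actual figures would be lower, which is why these numbers are best read as upper bounds.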

It’s important to remember, though, that all these calculations assume 100 percent usage of the chips, which de Vries admits is probably not realistic. They also ignore any potential energy efficiency improvements in either AI models or the hardware used to run them.

And this kind of simplistic analysis can be misleading. Jonathan Koomey, an energy economist who has previously criticized de Vries’ approach to estimating Bitcoin’s energy, told Wired in 2020—when the energy use of AI was also in the headlines—that “eye popping” numbers about the energy use of AI extrapolated from isolated anecdotes are likely to be overestimates.

Nonetheless, while the numbers might be over the top, the research highlights an issue people should be conscious of. In his paper, de Vries points to Jevons’ Paradox, which suggests that increasing efficiency often results in increased demand. So even if AI becomes more efficient, its overall power consumption could still rise considerably.

While it’s unlikely that AI will be burning through as much power as entire countries anytime soon, its contribution to energy usage and consequent carbon emissions could be significant.

Image Credit: AshrafChemban / Pixabay

This Week’s Awesome Tech Stories From Around the Web (Through October 14)

SCIENCE

Scientists Just Drafted an Incredibly Detailed Map of the Human Brain
Cassandra Willyard | MIT Technology Review
“Some brain atlases already exist, but this new suite of papers provides unprecedented resolution of the whole brain for humans and non-human primates. The human brain atlas includes the location and function of more than 3,000 cell types in adult and developing individuals. ‘This is far and away the most complete description of the human brain at this kind of level, and the first description in many brain regions,’ Lein says. But it’s still a first draft.”

BIOTECH

First-Ever Gene Therapy Trial to Cure Form of Deafness Begins
Sarah Neville | Financial Times (via Ars Technica)
“Up to 18 children from the UK, Spain, and the US are being recruited to the study, which aims to transform treatment of auditory neuropathy, a condition caused by the disruption of nerve impulses traveling from the inner ear to the brain. …Gene therapies now [hold] remarkable promise to restore hearing, [Professor Manohar Bance, an ear surgeon at Cambridge University Hospitals NHS Foundation Trust] suggested. ‘It’s the dawn of a new era,’ he added.”

TECH

Big Tech Struggles to Turn AI Hype Into Profits
Tom Dotan and Deepa Seetharaman | The Wall Street Journal
“AI often doesn’t have the economies of scale of standard software because it can require intense new calculations for each query. The more customers use the products, the more expensive it is to cover the infrastructure bills. These running costs expose companies charging flat fees for AI to potential losses.”

COMPUTING

2D Transistors, 3D Chips, and More Mad Stuff
Samuel K. Moore | IEEE Spectrum
“Because chip companies can’t keep on increasing transistor density by scaling down chip features in two dimensions, they have moved into the third dimension by stacking chips on top of each other. Now they’re working to build transistors on top of each other within those chips. Next, it appears likely, they will squeeze still more into the third dimension by designing 3D circuits with 2D semiconductors, such as molybdenum disulfide.”

ROBOTICS

Woman’s Experimental Bionic Hand Passes Major Test With Flying Colors
Ed Cara | Gizmodo
“Much like a real flesh-and-blood hand, it’s controlled by Karin’s nervous system and provides sensory feedback. Her new hand can purportedly perform around 80% of the typical daily tasks that a regular limb would be able to do. And it’s substantially reduced her phantom limb pain and the need for medication.”

ARTIFICIAL INTELLIGENCE

The Chatbots Are Now Talking to Each Other
Will Knight | Wired
“The company [Fantasy] gives each agent dozens of characteristics drawn from ethnographic research on real people, feeding them into commercial large language models like OpenAI’s GPT and Anthropic’s Claude. Its agents can also be set up to have knowledge of existing product lines or businesses, so they can converse about a client’s offerings. Fantasy then creates focus groups of both synthetic humans and real people.”

ROBOTICS

How Disney Packed Big Emotion Into a Little Robot
Evan Ackerman | IEEE Spectrum
“The adorable robot packs an enormous amount of expression into its child-size body, from its highly expressive head and two wiggly antennae to its stubby little legs. But what sets this robot apart from other small bipeds is how it walks—it’s full of personality, emoting as it moves in a way that makes it seem uniquely alive.”

SPACE

Scientists Reveal Plan to Use Lasers to Build Roads on the Moon
Andrew Griffin | The Independent
“In the new study, scientists examined whether lunar soil could be turned into something more substantial by using lasers. And they had some success, finding that lunar dust can be melted down into a solid substance. …The best [approach] used a 45 millimeter diameter laser beam to make hollow triangular shapes that were about 250 millimeters in size. Those pieces could be locked together to create solid surfaces that could be placed across the Moon’s surface, they suggest, and then used as roads and landing pads.”

ARTIFICIAL INTELLIGENCE

Uh-oh! Fine-Tuning LLMs Compromises Their Safety, Study Finds
Ben Dickson | VentureBeat
“As the rapid evolution of large language models (LLM) continues, businesses are increasingly interested in ‘fine-tuning’ these models for bespoke applications… However, a recent study by Princeton University, Virginia Tech, and IBM Research reveals a concerning downside to this practice. The researchers discovered that fine-tuning LLMs can inadvertently weaken the safety measures designed to prevent the models from generating harmful content, potentially undermining the very goals of fine-tuning the models in the first place.”

Image Credit: Florian Schmid / Unsplash

Starlink Satellites Are ‘Leaking’ Signals That Interfere With Our Most Sensitive Radio Telescopes

When I was a child in the 1970s, seeing a satellite pass overhead in the night sky was a rare event. Now it is commonplace: sit outside for a few minutes after dark, and you can’t miss them.

Thousands of satellites have been launched into Earth orbit over the past decade or so, with tens of thousands more planned in coming years. Many of these will be in “mega-constellations” such as Starlink, which aim to cover the entire globe.

These bright, shiny satellites are putting at risk our connection to the cosmos, which has been important to humans for countless millennia and has already been greatly diminished by the growth of cities and artificial lighting. They are also posing a problem for astronomers—and hence for our understanding of the universe.

In new research accepted for publication in Astronomy and Astrophysics Letters, my colleagues and I discovered Starlink satellites are also “leaking” radio signals that interfere with radio astronomy. Even in a “radio quiet zone” in the outback of Western Australia, we found the satellite emissions were far brighter than any natural source in the sky.

A Problem for Our Understanding of the Universe

Our team at Curtin University used radio telescopes in Western Australia to examine the radio signals coming from satellites. We found expected radio transmissions at designated and licensed radio frequencies, used for communication with Earth. In the following video, Starlink satellites emit bright flashes of radio transmission (shown in blue) at their allocated frequency of 137.5 MHz.

However, we also found signals at unexpected and unintended frequencies.

We found these signals coming from many Starlink satellites. It appears the signals may originate from electronics on board the spacecraft. In the below video, we see constant, bright emissions from Starlink satellites at 159.4 MHz, a frequency not allocated to satellite communications.

Why is this an issue? Radio telescopes are incredibly sensitive, able to pick up faint signals from countless light-years away.

Even an extremely weak radio transmitter hundreds or thousands of kilometers away from the telescope appears as bright as the most powerful cosmic radio sources we see in the sky. So these signals represent a serious source of interference.

And specifically, the signals are an issue at the location where we tested them: the site in Western Australia where construction has already begun on part of the biggest radio observatory ever conceived, the Square Kilometre Array (SKA). This project involves 16 countries, has been in progress for 30 years, and will cost billions of dollars over the next decade.

Huge effort and expense have been invested in locating the SKA and other astronomy facilities far away from humans. But satellites present a new threat from space, one that can’t be dodged.

What Can We Do About This?

It’s important to note satellite operators do not appear to be breaking any rules. The regulations around use of the radio spectrum are governed by the International Telecommunication Union, and they are complex. At this point there is no evidence Starlink operators are doing anything wrong.

The radio spectrum is crucial for big business and modern life. Think mobile phones, WiFi, GPS and aircraft navigation, and communications between Earth and space.

However, the undoubted benefits of space-based communications—such as for globally accessible fast internet connections—are coming into conflict with our ability to see and explore the universe. (There is some irony here, as WiFi in part owes its origins to radio astronomy.)

Regulations evolve slowly, while the technologies driving satellite constellations like Starlink are developing at lightning speed. So regulations are not likely to protect astronomy in the near term.

But in the course of our research, we have had a very positive engagement with SpaceX engineers who work on the Starlink satellites. It is likely that the goodwill of satellite operators, and their willingness to mitigate the generation of these signals, is the key to solving the issue.

In response to earlier criticisms, SpaceX has made improvements to the amount of sunlight Starlink satellites reflect, making them one-twelfth as bright in visible light as they used to be.

We estimate emissions in radio wavelengths will need to be reduced by a factor of a thousand or more to avoid significant interference with radio astronomy. We hope these improvements can be made, in order to preserve humanity’s future view of the universe, the fundamental discoveries we will make, and the future society-changing technologies (like WiFi) that will emerge from those discoveries.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Rafael Schmall / NOIRLab

These CRISPR-Engineered Super Chickens Are Resistant to Bird Flu

The gene editing tool CRISPR may be crucial in fighting off one of the deadliest viruses circulating the globe—a virus that has killed hundreds of millions since 2020.

It’s not Covid-19, of course. The virus is a type of especially aggressive bird flu that’s decimated chicken populations worldwide. Heartbreakingly, numerous flocks have been culled to contain the disease. Those skyrocketing price tags for a dozen eggs? This flu strain is partly to blame.

Grocery bills aside, the virus’ wildfire spread among poultry also raises the alarming prospect it could hop into other species—including humans. According to the World Health Organization, 10 countries across three continents have reported signs of the bird flu virus in mammals since 2022, sparking worries of another pandemic.

Several countries have launched vaccination campaigns to battle the virus. But it’s a formidable enemy. Like human flu strains, the virus rapidly mutates and makes vaccines less effective over time.

But what if we could nip infections in the bud?

This week, a team from the UK engineered “super chickens” resilient to a common bird flu. In chicken primordial germ cells—those that develop into sperm and egg—they used CRISPR-Cas9 to tweak a single gene that’s critical for virus reproduction.

The edited chickens grew and behaved like their non-edited “control” peers. They were healthy, laid eggs in the usual numbers, and clucked happily in their pens. But their genetic enhancement shined through when challenged with a real-life dose of flu similar to what might circulate in an infected coop. The edited chickens fought off the virus. All control birds caught the flu.

The results are “a long-awaited achievement,” Dr. Jiří Hejnar at the Czech Academy of Sciences’ Institute of Molecular Genetics, who was not involved in the study, told Science. Back in 2020, Hejnar used CRISPR to engineer chickens resistant to a cancer-causing virus, paving the road for efficient gene editing in birds.

The technology still has a ways to go. Despite the genetic boost, half of the edited birds got sick when challenged with a large dose of virus. This part of the experiment also raised a red flag: the virus rapidly adapted to the gene edits, picking up mutations that made it a better spreader among birds and others that could potentially help it hop into humans.

“This showed us a proof of concept that we can move towards making chickens resistant to the virus,” study author Dr. Wendy Barclay at Imperial College London said in a press conference. “But we’re not there yet.”

The Target

In 2016, Barclay discovered a chicken gene that bird flu viruses use to infect and grow inside chicken cells. Called ANP32A, it’s part of a gene family that translates DNA information into other biochemical messengers to build proteins. Once inside a bird cell, the flu virus can co-opt the gene’s products to make more copies of itself and spread to nearby cells.

ANP32A isn’t the only genetic link between cells and virus. A later study found a second “protective” gene that blocks flu viruses from growing in cells. The gene is similar to ANP32A, but with two major changes severing the virus’ connection to the cell like closing a door. Because viruses require a host to reproduce, the roadblock essentially cuts off their lifeline.

“If you could disrupt that [gene-virus] interaction in some way…perhaps by this gene editing, then the virus would not be able to replicate,” said Barclay.

The new study followed this line of thought. Using CRISPR, they altered ANP32A in chicken primordial germ cells by splicing in the two genetic changes observed in the protective gene. The cells, when injected into chicken embryos, grew into edited sperm and eggs in healthy mature chickens, who went on to have chicks with the edited ANP32A gene.

The process sounds technical, but it’s basically a 21st-century speed-up of an ancient farming technique: breed animals to preserve wanted traits—in this case, resistance against viruses.

The Stand

The team tested the edited chickens with several virus challenges.

In one, they squirted a dose of bird flu virus into the noses of 20 two-week-old chicks—half of them genetically modified, the others normally bred. The procedure sounds intense, but the amount of virus was carefully tailored to match what’s normally present in an infected coop.

All 10 control birds got sick. In contrast, only one of the edited chickens was infected. And even so, it didn’t transmit the virus to the other edited birds.

In a second test, the team amped up the dosage to about 1,000 times more than the original nose spritz. Every single bird, regardless of their genetic makeup, caught the virus. However, the edited birds took longer to develop flu symptoms. They also harbored lower levels of the virus and were less likely to transmit it to others in their coop—regardless of genetic makeup.

At first glance, the results sound promising. But they also raised a red flag. The reason the viruses infected the edited chickens despite their protective “super genes” was that the buggers rapidly adapted to the genetic edits. In other words, a gene swap meant to protect livestock could, ironically, push the virus to evolve more rapidly.

The Golden Trio

Why would this happen? Several tests found mutations in the viral genome that likely allowed the viruses to grab onto other members of the ANP32A family. These proteins normally sit on the bench during flu infections, silently resisting viral replication. But over time, the virus learned to co-opt each of them to boost its reproduction.

The team is well aware that similar changes could allow the virus to infect other species, including humans. “We were not alarmed by the mutations that we saw, but the fact we got breakthrough [infection] means we need more rigorous edits going forward,” said Barclay.

Dr. Sander Herfst at Erasmus University Medical Center, who studies bird flu’s incursion into mammals, agrees. “A water-tight system where no more [viral] replication takes place in chickens is necessary,” he told Science.

One potential solution is more gene editing. ANP32A is only one of three gene members that help viruses thrive. In a preliminary test, the team disabled all three genes in cells in a petri dish. The edited cells resisted a highly dangerous strain of flu virus.

But it’s still not a perfect solution. These genes are multitaskers that regulate health and fertility. Editing all three could damage a chicken’s health and ability to reproduce. The challenge now is to find gene edits that ward off viruses but still maintain normal function.

Biotechnology aside, regulations and public opinion are also struggling to catch up to the gene-editing world. CRISPRed animals are currently considered genetically modified organisms (GMOs) under European Union law, a designation that comes with a load of regulatory baggage and public perception troubles. However, because gene edits like the ones in this study mimic changes that could arise naturally—rather than splicing genes from one organism into another—some CRISPRed animals may prove more acceptable to consumers.

“I think the world is changing,” said study author Dr. Helen Sang, an expert who’s worked on flu-resistant birds for three decades. Regulations on gene-edited animals for food will likely shift as the technology matures—but in the end, what’s acceptable will depend on cultural attitudes around the world.

Image Credit: Toni Cuenca / Unsplash

Tens of Thousands of People Can Now Order a Waymo Robotaxi Anywhere in San Francisco

On Monday, Waymo announced on X that it’s expanding its city-wide, fully autonomous robotaxi service to thousands more riders in San Francisco.

The company had been testing a service area of nearly the whole city (around 47 square miles) with employees and, later, a group of test riders. But most people using the service were precluded from riding in the city’s dense northeast corner, an area including Fisherman’s Wharf, the Embarcadero, and Chinatown.

Now, the full San Francisco service area will be available to all current Waymo One users—amounting to tens of thousands of people, according to TechCrunch. While it’s a significant increase, not just anyone can use Waymo in SF yet. The company has been growing the service by admitting new riders from a waitlist that numbered 100,000 in June.

“This territory expansion applies to those riders who currently have access to our service and all those to be added from the waitlist in the near future,” Waymo spokesperson Christopher Bonelli told The Verge. “We are still seeing very strong demand, so we want to scale responsibly to maintain service quality and good user experience.”

It’s a milestone years in the making. Waymo traces its roots back to 2009, when it was the Google self-driving car project. The project first began testing the technology on public streets with safety drivers behind the wheel in Mountain View, California. Google spun the project out as Waymo, a standalone company under the Alphabet umbrella, in 2016 and began offering services with a public trial in Phoenix the next year. Testing began in San Francisco in 2021.

San Francisco has proven a more challenging environment than Phoenix, with aggressive urban drivers, steep hills, and at-times narrow, winding streets. Early on, commercial services were restricted to rides with a safety driver behind the wheel. Waymo and GM’s Cruise received approval from California’s Public Utilities Commission to charge riders for autonomous rides day and night without a safety driver this August.

The expansion has not been without controversy. Self-driving cars have blocked traffic and been involved in high-profile incidents, including a collision between a Cruise vehicle and a fire truck. Most recently, a pedestrian hit by another car—with a human at the wheel—was knocked in front of a Cruise vehicle. The car braked “aggressively” but could not avoid the pedestrian and came to a stop on her leg, pinning her to the street.

The California DMV asked Cruise to halve its San Francisco fleet last month while it investigated recent incidents. The City of San Francisco, meanwhile, has contested the decision to green-light expansion, and protesters have been disabling vehicles by placing construction cones on the cars’ hoods to block sensors.

As the rollout widens, the companies will continue to face questions about readiness and safety. In early September, Waymo released a report coauthored with insurance giant Swiss Re claiming its cars are safer than human drivers. In his own analysis of crash data, published the week before, technology reporter Timothy B. Lee wrote there’s uncertainty in the statistics and comparing self-driving cars to human drivers is difficult.

Still, he found that, after several million miles driven by both Cruise and Waymo, most documented collisions were low-speed and often the fault of another driver. This was especially true for Waymo, which he found had a comparatively cleaner safety record.

“Human beings drive close to 100 million miles between fatal crashes, so it will take hundreds of millions of driverless miles for 100 percent certainty on this question,” he wrote. “But the evidence for better-than-human performance is starting to pile up, especially for Waymo.” Lee also suggested that even more transparency on performance is needed to confidently assess the overall safety record of self-driving cars.

As they navigate recent criticism, both projects have plans to further expand. Cruise has announced testing in 14 new cities and is aiming for revenue of $1 billion in 2025. In addition to San Francisco and Phoenix, Waymo is building out services in Los Angeles and Austin. Later this year, the company will also begin testing electric self-driving vans made in partnership with Geely Zeekr; the vans lack a steering wheel and side mirrors.

While continued caution is warranted, Cruise and Waymo are also likely feeling some pressure financially. The two projects have poured billions into development of their self-driving platforms and still operate at a loss. In the coming months and years, they’ll have to prove they can be profitable—without compromising on safety.

Image Credit: Waymo

Bioprinted Skin Heals Wounds in Pigs With Minimal Scarring—Humans Are Next

0

Our skin is a natural wonder of bioengineering.

The largest organ in the body, it’s a waterproof defense system that protects against infections. It’s packed with sweat glands that keep us cool in soaring temperatures. It can take a serious beating—sunburns, scratches and scrapes, cooking oil splatters, and other accidents in daily life—but rapidly regenerates. Sure, there may be lasting scars, but signs of lesser damage eventually fade away.

Given these perks, it’s no wonder scientists have tried recreating skin in the lab. Artificial skin could, for example, cover robots or prosthetics to give them the ability to “feel” temperature, touch, or even heal when damaged.

It could also be a lifesaver. The skin’s self-healing powers have limits. People who suffer from severe burns often need a skin transplant taken from another body part. While effective, the procedure is painful and increases the chances of infection. In some cases, there might not be enough undamaged skin left. A similar dilemma haunts soldiers wounded in battle or those with inherited skin disorders.

Recreating all the skin’s superpowers is tough, to say the least. But last week, a team from Wake Forest University took a major step forward with artificial skin that healed large wounds when transplanted into mice and pigs.

The team used six different human skin cell types as “ink” to print out three-layered artificial skin. Unlike previous iterations, this artificial skin closely mimics the structure of human skin.

In proof-of-concept studies, the team transplanted the skin into mice and pigs with skin injuries. The skin grafts rapidly tapped into blood vessels from surrounding skin, integrating into the host. They also helped shape collagen—a protein essential for healing wounds and reducing scarring—into a structure similar to natural skin.

“These results show that the creation of full thickness human bioengineered skin is possible, and promotes quicker healing and more naturally appearing outcomes,” said study author Dr. Anthony Atala.

Wait…What’s Full Thickness Skin?

We often picture the skin as a fitted sheet that wraps around the body. But under the microscope, it’s an intricate masterpiece of bio-architecture.

Or I like to think of it as a three-layered cake.

Each layer has different cell types tailored to their distinctive functions. The top layer is the guardian. A direct link to the outside world, it has cell types that can endure UV light, arid weather, and harmful bacteria. It also houses cells that produce pigmentation. These cells continuously shed when damaged and are replaced to keep the barrier strong.

The middle layer is the bridge. Here, blood vessels and nerve fibers connect the skin to the rest of the body. This layer is packed with cells that produce body hair, sweat, and lubricating oils—the bane of anyone prone to acne. As the widest layer, it’s held tightly together by collagen, which gives the skin its flexibility and strength.

Finally, the deepest skin layer is the “puffy coat.” Made primarily of collagen and fat cells, this layer is a shock absorber that protects the skin from injuries and helps maintain body heat.

Recreating all these structures and functions is incredibly hard. Atala’s solution? Three-dimensional bioprinting.

Skin in the Game

Atala is no stranger to bioprinting.

In 2016, his team developed a tissue-organ printer that can print large tissues of any shape. Using clinical data, the team made computer models to guide the printer when printing various bone structures and muscles. A few years later, they engineered a skin bioprinter that used two cell types—from either the top or middle layer—to directly patch injured skin. Though the skin could close large wounds, it only captured part of natural skin’s complexity.

The new study used six types of human cells as bioink, recreating our skin’s architecture top to bottom. To manufacture the artificial skin, the team used computer software to direct the placement of cells in each layer. Called 3D-extrusion printing, the technology uses air pressure to print highly sophisticated tissues out of a nozzle. It sounds complicated, but it’s a bit like squeezing out icing of different colors to decorate a cake.

As a first step, the team suspended cells in a hydrogel made primarily of a liver-secreted protein. Unlike synthetic materials, this body-produced base increases biocompatibility. The team then printed a 3D skin graft, layer by layer, measuring an inch on each side—a bit bigger than a sugar cube.

The bioprinted skin maintained its three layers for at least 52 days in the lab and developed areas with pigmentation and normal shedding.

Encouraged, the team next tested the artificial skin in mice. All wounds treated with the artificial skin grafts completely healed in two weeks, as opposed to those treated with the hydrogel alone or left to heal naturally.

The artificial skin was especially good at building the skin’s upper protective layer, forming structures that resembled natural healing. It also produced collagen, and—more importantly—weaved it into a wicker-basket-like structure similar to human skin.

The bioprinted skin further recruited the mice’s own blood vessel cells, generating a network of small vessels inside the graft. Using a stain to track human proteins in the graft, the team found the transplanted cells integrated with their host in the middle layer of the skin.

Squeaking By?

Mice have thinner skin than humans. Pigs’ skin, in contrast, is closer to ours. In a second test, the team scaled up the technology for transplantation in pigs. Here, they harvested four types of cells from pigs through biopsies—including some that make up the skin’s outer layer, collagen, blood vessels, and fatty tissue—and grew them inside a bioreactor for 28 days.

Some batches failed. On average, however, the brew generated enough cells to double the size of the initial graft for greater coverage. The resulting artificial skin patch was roughly the size of the face of a Rubik’s cube and matched the thickness of the pig’s skin.

Like the results in mice, the grafts rapidly closed large wounds without the usual “puckering” effect—where the skin shrivels like a grape into a raisin—that leads to scarring.

The team concluded this is likely because the graft amplified genes responsible for wound healing, with some also regulating immune responses that help grow new blood vessels and reduce scarring.

The artificial skin is promising but still in its infancy. When grafted onto pigs, it didn’t reliably produce pigmentation, which could be troubling to those with darker skin tones. The grafts also didn’t produce body hair, though they contained structures for its growth in the bioink. While it might not be the worst (no more shaving!), the results suggest there’s still a lot to learn.

To Atala, the effort’s worth it. “Comprehensive skin healing is a significant clinical challenge, affecting millions of individuals worldwide, with limited options,” he said. The study suggests printing full-scale skin is possible for treating devastating wounds in humans.

Image Credit: A normal skin cell under the microscope. Torsten Wittmann, University of California, San Francisco (via NIH/Flickr)

How Flash Heating Plastic Waste Could Produce Green Hydrogen and Graphene

0

Hydrogen could be a green fuel of the future, but at present it’s mainly made out of fossil fuels in a process that generates a lot of CO2. A new technique, however, generates hydrogen gas from plastic waste with no direct carbon emissions, while creating valuable graphene as a byproduct.

Batteries are currently the leading approach to decarbonizing transportation, but using hydrogen as a fuel still has considerable advantages. It has significantly higher energy density, which could give hydrogen-powered vehicles greater range, and refueling with hydrogen is much faster than recharging a battery. It’s also a promising fuel for heavy industries like steelmaking that can’t be easily electrified and could be useful for long-term energy storage.

Hydrogen’s green credentials depend heavily on how it’s produced though. Using electricity to split water into hydrogen and oxygen can be sustainable if powered by renewable energy. But the process is currently very expensive, and most hydrogen today is instead made by reacting methane from fossil fuels with steam, producing considerable amounts of CO2 as a byproduct.

A promising new process developed by researchers at Rice University generates hydrogen from plastic waste without directly emitting CO2. Of course, it too would need to be powered by renewable energy. But in addition to yielding hydrogen, the process also produces commercial-grade graphene as a byproduct, which can be sold to pay for the hydrogen production.

“We converted waste plastics—including mixed waste plastics that don’t have to be sorted by type or washed—into high-yield hydrogen gas and high-value graphene,” Kevin Wyss, who led the research while doing his PhD at Rice, said in a press release. “If the produced graphene is sold at only 5 percent of current market value—a 95 percent off sale—clean hydrogen could be produced for free.”

The new process relies on a technique known as flash joule heating, which was developed in the lab of Rice professor James Tour. It involves grinding plastic into confetti-size pieces, mixing it with a conductive material, placing it in a tube, and then passing a very high voltage through it. This heats the mixture to around 5,000 degrees Fahrenheit in just 4 seconds, causing the carbon atoms in the plastic to fuse together into graphene and releasing a mix of volatile gases.
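
For a rough sense of scale, the heating step can be sketched as a back-of-envelope energy calculation. All numbers below (the specific heat, the temperature rise, the one-gram sample) are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope estimate of the energy needed for flash joule heating.
# All numbers are illustrative assumptions, not values from the study.

def flash_heating_energy(mass_g, c_j_per_g_k=1.0, delta_t_k=2700):
    """Energy (joules) to heat `mass_g` grams of plastic/char mixture by
    `delta_t_k` kelvin, assuming a constant specific heat and ignoring
    heat losses and reaction enthalpies."""
    return mass_g * c_j_per_g_k * delta_t_k

def average_power_w(energy_j, duration_s=4.0):
    """Average electrical power if that energy is delivered in `duration_s`."""
    return energy_j / duration_s

energy = flash_heating_energy(1.0)  # ~2.7 kJ per gram
power = average_power_w(energy)     # ~675 W per gram over the 4-second flash
print(f"{energy:.0f} J, {power:.0f} W average per gram")
```

Even as a crude estimate, this makes clear why the process needs a very high voltage delivered in a short burst rather than a conventional furnace.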

The lab initially focused on using the technique to turn waste plastic into graphene, and Tour founded a startup called Universal Matter to commercialize the process. But after analyzing the composition of the vapor byproducts, the team realized they contained a significant amount of hydrogen gas with a purity as high as 94 percent. The results were published in a recent paper in Advanced Materials.

By locking up all the plastic’s carbon in graphene, the approach produces hydrogen without releasing any CO2. And the economics are very attractive compared to other methods of producing green hydrogen—the feedstock is a waste product, and selling the graphene for even a fraction of the current market price essentially means the hydrogen is being produced for free.
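
The break-even claim is easy to sanity-check with arithmetic. The sketch below uses entirely hypothetical prices and yields (the study does not publish a per-tonne cost model), but it shows how graphene revenue at a steep discount could still offset the cost of producing the hydrogen:

```python
# Toy break-even calculation for the "free hydrogen" claim.
# Prices and yields are hypothetical placeholders, not figures from the study.

GRAPHENE_MARKET_PRICE = 60_000  # $ per tonne (assumed)
DISCOUNT = 0.05                 # sell at 5% of market value, per the quote

def net_hydrogen_cost(process_cost, graphene_tonnes, h2_kg):
    """Net cost per kg of hydrogen after graphene revenue is subtracted.
    A negative value means graphene sales more than cover the process."""
    revenue = graphene_tonnes * GRAPHENE_MARKET_PRICE * DISCOUNT
    return (process_cost - revenue) / h2_kg

# Hypothetical run: $2,000 of electricity and handling per tonne of plastic,
# yielding 0.8 tonnes of graphene and 40 kg of hydrogen.
cost = net_hydrogen_cost(process_cost=2_000, graphene_tonnes=0.8, h2_kg=40)
print(f"net cost per kg H2: ${cost:.2f}")
```

With these invented numbers, the graphene revenue exceeds the process cost, so the hydrogen is effectively free—the same logic as Wyss's "95 percent off sale" remark.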

Getting the process to work at an industrial scale will inevitably be challenging, Upul Wijayantha at Cranfield University in the UK told New Scientist. “We don’t know, beyond the lab scale, what kind of challenges they will encounter when they handle a massive scale of plastics, gas mixtures and byproducts, like graphene,” he says.

Nonetheless, Tour is optimistic that the approach could be commercialized relatively quickly. “You could have a smaller-scale deployment for generating hydrogen certainly within five years,” he told New Scientist. “You could have a large-scale deployment within 10.”

If he’s right, the new technique could kill two birds with one stone—helping tackle plastic waste and producing green fuels all at once.

Image Credit: Layered stacks of flash graphene formed from plastic waste. (Kevin Wyss/Tour lab)

This Week’s Awesome Tech Stories From Around the Web (Through October 7)

0

VIRTUAL REALITY

The Las Vegas Sphere Makes Virtual Reality a Full-Body Experience
Steven Levy | Wired
“You know that movie Tron where someone got sucked into a video game? Being inside the Sphere was a real-life sci-fi film where 18,000 people were suddenly inside an over-the-top 1980s music video. The super high resolution display bounded over the uncanny valley, showing scenery from places both real and imagined that convincingly made it seem like the band—and audience—had been transported to bizarre locales.”

ARTIFICIAL INTELLIGENCE

Tiny Language Models Come of Age
Ben Brubaker | Quanta
“To better understand how neural networks learn to simulate writing, researchers trained simpler versions on synthetic children’s stories. …[They] showed that language models thousands of times smaller than today’s state-of-the-art systems rapidly learned to tell consistent and grammatical stories when trained in this way.”

ROBOTICS

Google DeepMind Unites Researchers in Bid to Create an ImageNet of Robot Actions
Brian Heater | TechCrunch
“‘Building a dataset of diverse robot demonstrations is the key step to training a generalist model that can control many different types of robots, follow diverse instructions, perform basic reasoning about complex tasks and generalize effectively,’ said DeepMind researchers Quan Vuong and Pannag Sanketi. They add that such a task is far too large to entrust to a single lab. The database features more than 500 skills and 150,000 tasks pulled from 22 different robot types.”

NEUROSCIENCE

A Lab Just 3D-Printed a Neural Network of Living Brain Cells
Celia Ford | Wired
“You can 3D-print nearly anything: rockets, mouse ovaries, and for some reason, lamps made of orange peels. Now, scientists at Monash University in Melbourne, Australia, have printed living neural networks composed of rat brain cells that seem to mature and communicate like real brains do.”

ENERGY

Inside the Global Race to Tap Potent Offshore Wind
Peter Fairley | IEEE Spectrum
“To arrest the accelerating pace of a changing climate, the world needs a lot more clean energy to electrify heating, transportation, and industry and to displace fossil-fuel generation. Offshore wind power is already playing a key role in this transition. But the steadiest, strongest wind blows over deep water—well beyond the 60- to 70-meter limit for the fixed foundations that anchor traditional wind turbines to the ocean floor.”

ROBOTICS

Meet the Next Generation of Doctors—and Their Surgical Robots
Neha Mukherjee | Wired
“Don’t worry, your next surgeon will definitely be a human. But just as medical students are training to use a scalpel, they’re also training to use robots designed to make surgeries easier. …’Some residency programs didn’t see the benefit of teaching their surgery residents robotics,’ says [Alisa Coker, the director of robotic surgery education at Johns Hopkins]. ‘But over the last six years, residents started demanding to be taught robotics … They were asking that we prepare a curriculum to teach them.’”

ETHICS

AI Firms Working on ‘Constitutions’ to Keep AI From Spewing Toxic Content
Madhumita Murgia | Financial Times (via Ars Technica)
“The goal is for AI to learn from these fundamental principles and keep itself in check, without extensive human intervention. ‘We, humanity, do not know how to understand what’s going on inside these models, and we need to solve that problem,’ said Dario Amodei, chief executive and co-founder of AI company Anthropic. Having a constitution in place makes the rules more transparent and explicit so anyone using it knows what to expect. ‘And you can argue with the model if it is not following the principles,’ he added.”

TECH

Chatbot Hallucinations Are Poisoning Web Search
Will Knight | Wired
“Untruths spouted by chatbots ended up on the web—and Microsoft’s Bing search engine served them up as facts. …The problem of search results becoming soured by AI content may get a lot worse as SEO pages, social media posts, and blog posts are increasingly made with help from AI. This may be just one example of generative AI eating itself like an algorithmic ouroboros.”

LAW AND ETHICS

Artists Are Losing the War Against AI
Matteo Wong | The Atlantic
“OpenAI has introduced a tool for artists to keep their images from training future AI programs. …But opting out is an onerous process, and may be too complex to meaningfully implement or enforce. The ability to withdraw one’s work might also be coming too late: Current AI models have already digested a massive amount of work, and even if a piece of art is kept away from future programs, it’s possible that current models will pass the data they’ve extracted from those images on to their successors.”

SPACE

Could Earth Be the Only Planet With Intelligent Life?
Ethan Siegel | Big Think
“We live in a vast observable Universe, with sextillions of stars and even larger numbers of planets, with more unobservable Universe and potentially even a multiverse beyond what we can see. While the ingredients for life and living planets are seemingly everywhere, however, we have yet to detect a single sign of life beyond Earth—even the simplest forms of life—on any world at all. Is it possible, or even likely, that despite all of the chances out there for intelligent life, planet Earth is home to the only instance of it in all of reality?”

Image Credit: Joseph Corl / Unsplash

Scientists Unearth Brand New Links Between Genes and Disease in Our Blood

0

A blood draw is one of the most mundane clinical tests. It can also be a Rosetta stone for decoding genetic information and linking DNA typos to health and disease.

This week, three studies in Nature focused on the watery component of blood—called plasma—as a translator between genes and bodily functions. Devoid of blood cells, plasma is yellowish in color and packs thousands of proteins that swirl through the bloodstream. Plasma proteins trigger a myriad of biological processes: they tweak immune responses, alter metabolism, and even spur—or hinder—new connections in the brain.

They’re also a bridge between our genetics and health.

Ever since first mapping the human genome, scientists have tried to link genetic typos to health and disease. It’s a tough problem. Some of our most troubling health concerns—cancer, heart and vascular disease, and dementia and other brain disorders—are influenced by multiple genes working in concert. Diet, exercise, and other lifestyle factors muddle gene-to-body connections.

The new studies tapped into the UK Biobank, a comprehensive database containing plasma samples from over 500,000 people alongside their health and genetic data.

The research found multiple protein “signatures” in plasma that mapped onto specific parts of the genetic code—for example, rare DNA letter edits that were previously hard to capture. Digging deeper, several plasma protein signatures reflected genetic changes that linked to fatty liver disease. Other associations between gene and plasma predicted blood type, gut health, and other physical traits.

These proof-of-concept examples may bring new medical discoveries. Plasma is easily obtainable through a blood draw. As a translator between genetic and physical profiles, its protein signatures could inform new medications, diagnostics, or treatments.

Notably, the trio of studies came from an unexpected coalition—13 biopharmaceutical companies working together in a precompetitive pact. The arrangement is exactly what it sounds like: instead of competing against each other, the companies are sharing results to tackle one of the toughest biological mysteries—how genes, with a hefty dose of environmental influence, make us who we are.

Pharma Frenemies

Back in 2020, a handful of the world’s most influential pharmaceutical companies made a pact to collaborate on a single endeavor—the Pharma Proteomics Project.

The UK Biobank, one of the world’s largest and most comprehensive biomedical resources, was the core organizer. First launched in 2006, the biobank has grown into an enormous database: So far, over half a million participants in the UK have signed up, including people of diverse ethnicities. The database contains biographical information—age, gender, and health status—and more in-depth measures such as brain scans, gene sequences, and blood tests.

These aren’t just clinical blood tests to check your mineral or hormone levels. Using blood samples, the Biobank has built a full profile of each participant’s plasma proteins.

Over the last few years, with consent from the volunteers, the Biobank has released its dataset to scientists. All genetic data were scrubbed of information that could trace back to any volunteer.

The massive dataset caught big pharma’s eye. Plasma proteins are easy to collect and analyze, making them perfect for diagnosing diseases. Deciphering how they work in the body could also help researchers discover potential disease targets.

Dr. Naomi Allen, chief scientist of the UK Biobank, agreed. “Measuring protein levels in the blood is crucial to understanding the link between genetic factors and the development of common life-threatening diseases,” she said when the project launched in 2020.

“With data on genetic, imaging, lifestyle factors and health outcomes over many years, this will be the largest proteomic [a collection of all proteins] study in the world to be shared as a global scientific resource.”

A Bloody Good Link

The consortium paid off.

In one study, from Biogen and collaborators, the team took a first step toward linking genetic diversity to health status.

Every human shares similar genes, but these genes vary in their precise lettering. A single-letter DNA swap can lead to inherited diseases, such as sickle cell. Other times, a gene copies itself when it’s not supposed to, causing deadly neurological problems such as Huntington’s disease.

Yet how most genetic typos contribute to health largely remains a mystery.

Here, the team analyzed nearly 3,000 plasma proteins from 54,219 UK Biobank participants along with their genetic profiles. The proteins were selected to best capture a person’s general health status, including their heart health, metabolism, inflammation, brain function, and any cancer indicators.

Overall, they unearthed roughly 16 million single-letter DNA swaps that mapped to more than 3,700 different locations in the genome. Called “genomic loci,” these sites are extremely valuable for bridging genetic data to proteins associated with diseases. Compared to previous studies, 81 percent of these gene-to-protein associations are new.

Meanwhile, the plasma proteins formed a “fingerprint” of sorts, allowing scientists to predict a person’s age, sex, body mass index, blood groups, and even kidney and liver functions.

In one test using the plasma “fingerprint,” the team discovered a genetic network that boosts immune cell function. Other tests found an intriguing link between blood type and gut health and decoded how genetic variations affect immune responses in different people.

In other words, the team built a genetic atlas that maps onto the plasma protein universe.

Rare Genetics Swaps and Broader Ancestry

Another study gave the plasma-genetics screen its formal name: proteogenomics.

Led by AstraZeneca, the team mined the same biobank dataset for rare genetic variants that link to changes in plasma proteins and diseases. Integrating the two could help solve “disease mechanisms, identify clinical biomarkers, and discover drug targets,” the team said.

Scanning through the biobank, they found over 5,400 rare associations between genes and plasma protein signatures. In an early Halloween twist, two genes especially stood out: STAB1 and STAB2. Normally thought to be involved in clearing away old plasma proteins, the genes also surprisingly associated with dozens of protein partners, suggesting they have other roles.

“What’s exciting about this research is that we are now able to link these high-impact rare genetic variants to effects on the human plasma proteome,” said study author Dr. Slavé Petrovski with AstraZeneca and the University of Melbourne.

The coalition also bolstered genetic diversity in research. Most studies that associate genes with diseases are based on people of European ancestry.

Here, the third study focused on Biobank participants of British or Irish, African, and South Asian ancestries to reveal genetic “hotspots.” They then matched those data with a dataset previously collected from an Icelandic population. There’s a “modest correlation,” said the team, adding that differences in technology could have altered results—something to consider going forward.

Linking genes to proteins to health has always been a difficult game of biomedical telephone. With plasma proteins as a guide, we may have a proxy to bridge genetics to health and disease. The consortium has made all data publicly available for other research teams to explore.

Image Credit: National Institutes of Health

Organisms Without Brains Can Learn, Too—So What Does It Mean to Be a Thinking Creature?

0

The brain is an evolutionary marvel. By shifting the control of sensing and behavior to this central organ, animals (including us) are able to flexibly respond and flourish in unpredictable environments. One skill above all—learning—has proven key to the good life.

But what of all the organisms that lack this precious organ? From jellyfish and corals to our plant, fungi, and single-celled neighbors (such as bacteria), the pressure to live and reproduce is no less intense, and the value of learning is undiminished.

Recent research on the brainless has probed the murky origins and inner workings of cognition itself, and is forcing us to rethink what it means to learn.

Learning About Learning

Learning is any change in behavior as a result of experience, and it comes in many forms. At one end of the spectrum sits non-associative learning. Familiar to anyone who has “tuned out” the background noise of traffic or television, it involves turning up (sensitizing) or dialing down (habituating) one’s response with repeated exposure.

Further along is associative learning, in which a cue is reliably tied to a behavior. Just as the crinkling of a chip packet brings my dog running, so too the smell of nectar invites pollinators to forage for a sweet reward.

Higher still are forms like conceptual, linguistic, and musical learning, which demand complex coordination and the ability to reflect on one’s own thinking. They also require specialized structures within the brain, and a large number of connections between them. So, to our knowledge, these types of learning are limited to organisms with sufficient “computing power”—that is, with sufficiently complex brains.

The presumed relationship between brain complexity and cognitive ability, however, is anything but straightforward when viewed across the tree of life.

This is especially true of the fundamental forms of learning, with recent examples reshaping our understanding of what was thought possible.

Who Needs a Brain?

Jellyfish, jelly-combs, and sea anemones stand among the earliest ancestors of animals, and share the common feature of lacking a centralized brain.

Nonetheless, the beadlet anemone (Actinia equina) is able to habituate to the presence of nearby clones. Under normal circumstances it violently opposes any encroachment on its territory by other anemones. When the intruders are exact genetic copies of itself, however, it learns to recognize them over repeated interactions, and contain its usual aggression.

A recent study has now shown box jellyfish too are avid learners, and in an even more sophisticated manner. Though they possess only a few thousand neurons (nerve cells) clustered around their four eyes, they are able to associate changes in light intensity with tactile (touch) feedback and adjust their swimming accordingly.

This allows for more precise navigation of their mangrove-dominated habitats, and so improves their odds as venomous predators.

No Neurons, No Problem

Stretching our instincts further, evidence now abounds for learning in organisms that lack even the neuronal building blocks of a brain.

Slime molds are single-celled organisms that belong to the protist group. They bear a passing resemblance to fungi, despite being unrelated. Recently (and inaccurately) popularized on TV as zombie-making parasites, they also offer a striking case study in what the brainless can achieve.

Elegant experiments have documented a suite of cognitive tricks, from remembering routes to food to using past experience to inform future foraging and even learning to ignore bitter caffeine in search of nutritious rewards.

Plants too can be counted among the brainless thinkers. Venus flytraps use clever sensors to remember and tally up the touches of living prey. This allows them to close their traps and begin digestion only when they’re sure of a nutritious meal.
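
The flytrap's counting trick is often described as a short-lived memory of the first touch. The toy model below caricatures that logic; the 30-second window is an illustrative assumption, not a value measured in the article:

```python
# Toy model of the flytrap's touch counter: the trap snaps shut only when a
# second touch arrives within a short memory window of the first—a rough
# caricature of the tallying behavior described above.

TRAP_MEMORY_S = 30  # assumed decay window for the first touch, in seconds

def trap_closes(touch_times):
    """Return True if any two touches fall within the memory window."""
    times = sorted(touch_times)
    return any(b - a <= TRAP_MEMORY_S for a, b in zip(times, times[1:]))

print(trap_closes([0, 12]))  # second touch inside the window
print(trap_closes([0, 45]))  # first touch "forgotten" before the second
```

The point of the decaying memory is economy: a single stray raindrop doesn't trigger the costly act of closing, but a wriggling insect touching twice in quick succession does.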

In less gruesome examples, the shameplant (Mimosa pudica) curls and droops its leaves to protect itself from physical disturbance. This is an energetically costly activity, however, which is why it can habituate and learn to ignore repeated false alarms. Meanwhile, the garden pea can seemingly learn to associate a gentle breeze, itself uninteresting, with the presence of essential sunlight (though this finding has not gone unchallenged).

These results have driven calls to consider plants as cognitive and intelligent agents, with the ensuing debate spanning science and philosophy.

Thinking Big

Learning, then, is not the sole province of those with a brain, or even the rudiments of one. As evidence of cognitive prowess in the brainless continues to accumulate, it challenges deep intuitions about the biology of sensation, thought, and behavior more generally.

The implications also reach beyond science into ethics, as with recent advances in our understanding of nociception, or pain perception. Do fish, for example, feel pain, despite not having the requisite brain structures like those of primates? Yes. What about insects, with an even simpler arrangement of an order-of-magnitude fewer neurons? Probably.

And if such organisms can learn and feel, albeit in ways unfamiliar to us, what does it say about how we treat them in our recreational, research, and culinary pursuits?

Above all else, these curious and diverse forms of life are a testament to the creative power of adaptive evolution. They invite us to reflect on our often-assumed seat at the apex of the tree of life, and remind us of the inherent value in studying, appreciating, and conserving lives very different from our own.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Sasha • Stories / Unsplash 

This Ant-Inspired AI Brain Helps Farm Robots Better Navigate Crops

0

Picture this: the setting sun paints a cornfield in dazzling hues of amber and gold. Thousands of corn stalks, heavy with cobs and rustling leaves, tower over everyone—kids running through corn mazes; farmers examining their crops; and robots whizzing by as they gently pluck ripe, sweet ears for the fall harvest.

Wait, robots?

Idyllic farmlands and robots may seem a strange couple. But thanks to increasingly sophisticated software allowing robots to “see” their surroundings—a technology called computer vision—they’re rapidly integrating into our food production pipeline. Robots are now performing everyday chores, such as harvesting ripe fruits or destroying crop-withering weeds.

With an ongoing shortage in farmworkers, the hope is that machines could help boost crop harvests, reliably bring fresh fruits and veggies to our dinner tables, and minimize waste.

To fulfill the vision, robot farmworkers need to be able to traverse complex and confusing farmlands. Unfortunately, these machines aren’t the best navigators. They tend to get lost, especially when faced with complex and challenging terrain. Like kids struggling in a corn maze, robots forget their location so often the symptom has a name: the kidnapped robot problem.

A new study in Science Robotics aims to boost navigational skills in robots by giving them memory.

Led by Dr. Barbara Webb at the University of Edinburgh, the inspiration came from a surprising source—ants. These critters are remarkably good at navigating to desired destinations after just one trip. Like seasoned hikers, they also remember familiar locations, even when moving through heavy vegetation along the way.

Using images collected from a roaming robot, the team developed an algorithm based on brain processes in ants during navigation. When it was run on hardware also mimicking the brain’s computations, the new method triumphed over a state-of-the-art computer vision system in navigation tasks.

“Insect brains in particular provide a powerful combination of efficiency and effectiveness,” said the team.

Solving the problem doesn’t just give wayward robotic farmhands an internal compass to help them get home. Tapping into the brain’s computation—a method called neuromorphic computing—could further finesse how robots, such as self-driving cars, interact with our world.

An Ant’s Life

If you’ve ever wandered around dense woods or corn mazes, you’ve probably asked your friends: Where are we?

Unlike walking along a city block—with storefronts and other buildings as landmarks—navigating a crop field is extremely difficult. A main reason is that it’s hard to tell where you are and what direction you’re facing because the surrounding environment looks so similar.

Robots face the same challenge in the wild. Currently, vision systems use multiple cameras to capture images as the robot traverses terrain, but they struggle to identify the same scene if lighting or weather conditions change. The algorithms are slow to adapt, making it difficult to guide autonomous robots in complex environments.

Here’s where ants come in.

Even with relatively limited brain resources compared to humans, ants are remarkably brilliant at learning and navigating complex new environments. They easily remember previous routes regardless of weather, mud, or lighting.

They can follow a route with “higher precision than GPS would allow for a robot,” said the team.

One quirk of an ant’s navigational prowess is that it doesn’t need to know exactly where it is during navigation. Rather, to find its target, the critter only needs to recognize whether a place is familiar.

It’s like exploring a new town from a hotel: you don’t necessarily need to know where you are on the map. You just need to remember the road to get to a café for breakfast so you can maneuver your way back home.

Using ant brains as inspiration, the team built a neuromorphic robot in three steps.

The first was software. Despite having small brains, ants are especially adept at fine-tuning their neural circuits for revisiting a familiar route. Based on their previous findings, the team homed in on “mushroom bodies,” a type of neural hub in ant brains. These hubs are critical for learning visual information from surroundings. The information then spreads across the ant’s brain to inform navigational decisions. For example, does this route look familiar, or should I try another lane?
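This familiarity-style learning can be caricatured in a few lines of code: project each view into a large, sparsely active layer (a stand-in for the mushroom body’s Kenyon cells), then train a single output neuron by silencing synapses from cells active along the route, so previously seen views later score low. The layer sizes, sparsity level, and anti-Hebbian rule below are illustrative assumptions, not the study’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random projection: each "Kenyon cell" mixes the input pixels,
# producing a sparse, high-dimensional code of the current view.
N_INPUT, N_KC, SPARSITY = 100, 2000, 0.05
proj = rng.standard_normal((N_KC, N_INPUT))

def kc_code(view):
    act = proj @ view
    # keep only the top 5% most active cells (sparse coding)
    return (act >= np.quantile(act, 1.0 - SPARSITY)).astype(float)

# One output neuron starts fully connected. Learning *depresses*
# synapses from cells active on the training route (anti-Hebbian),
# so familiar views later drive the neuron only weakly.
weights = np.ones(N_KC)

def train(view):
    weights[kc_code(view) > 0] = 0.0

def novelty(view):
    return float(weights @ kc_code(view))

route_views = [rng.random(N_INPUT) for _ in range(20)]
for v in route_views:
    train(v)

print(novelty(route_views[0]))       # low: seen before
print(novelty(rng.random(N_INPUT)))  # higher: never seen
```

Following a route then reduces to steering toward whichever view currently scores most familiar—no map or global position required.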

Next came event cameras, which capture images the way an animal’s eye might: instead of recording full frames, they report per-pixel changes in brightness as they happen. The resulting data are especially useful for training computer vision because they mimic how the eye responds to changing light.
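The principle can be illustrated by deriving events from two ordinary frames. A real event camera does this asynchronously in hardware, pixel by pixel; the log scale and threshold below are arbitrary illustrative choices:

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.2):
    """Emit (x, y, polarity) tuples wherever log-brightness changed
    enough—mimicking an event camera, which reports per-pixel changes
    instead of full frames."""
    diff = np.log1p(curr) - np.log1p(prev)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(x), int(y), 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[1, 2] = 1.0  # one pixel brightened; everything else is static
print(events_from_frames(prev, curr))  # → [(2, 1, 1)]
```

Because static pixels produce no events at all, the data stream stays sparse—one reason the approach pairs well with power-frugal neuromorphic hardware.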

The last component is the hardware: SpiNNaker, a computer chip built to mimic brain functions. First engineered at the University of Manchester in the UK, the chip simulates the internal workings of biological neural networks to encode memory.

Weaving all three components together, the team built their ant-like system. As a proof of concept, they used the system to power a mobile robot as it navigated difficult terrain. The robot, roughly the size of an extra-large hamburger—and aptly named the Turtlebot3 Burger—captured images with the event camera as it went on its hike.

As the robot rolled through forested lands, its neuromorphic “brain” rapidly reported “events”—pixel-level changes in its surroundings. The algorithm triggered a warning event, for example, if branches or leaves obscured the robot’s vision.

The little bot traversed roughly 20 feet in vegetation of various heights and learned from its treks. This range is typical for an ant navigating its route, said the team. In multiple tests, the AI model broke down data from the trip for more efficient analysis. When the team changed the route, the AI responded with confusion—wait, was this here before?—showing that it had learned the usual route.

In contrast, a popular algorithm struggled to recognize the same route. The software could only follow a route if it saw the exact same video recording. In other words, compared to the ant-inspired algorithm, it couldn’t generalize.

A More Efficient Robot Brain

AI models are notoriously energy-hungry. Neuromorphic systems could slash their gluttony.

SpiNNaker, the hardware behind the system, puts the algorithm on an energy diet. Based on the brain’s neural network structures, the chip supports massively parallel computing, meaning that multiple computations can occur at the same time. This setup doesn’t just decrease data processing lag, but also boosts efficiency.

In this setup, each chip contains 18 cores, simulating roughly 250 neurons. Each core has its own instructions on data processing and stores memory accordingly. This kind of distributed computing is especially important when it comes to processing real-time feedback, such as maneuvering robots in difficult terrain.

As a next step, the team is digging deeper into ant brain circuits. Exploring neural connections between different brain regions and groups could further boost a robot’s efficiency. In the end, the team hopes to build robots that interact with the world with as much complexity as an ant.

Image Credit: Faris Mohammed / Unsplash

Quantum Computers Could Crack Encryption Sooner Than Expected With New Algorithm

0

One of the most well-established and disruptive uses for a future quantum computer is the ability to crack encryption. A new algorithm could significantly lower the barrier to achieving this.

Despite all the hype around quantum computing, there are still significant question marks around what quantum computers will actually be useful for. There are hopes they could accelerate everything from optimization processes to machine learning, but how much easier and faster they’ll be remains unclear in many cases.

One thing is pretty certain though: A sufficiently powerful quantum computer could render our leading cryptographic schemes worthless. While the mathematical puzzles underpinning them are virtually unsolvable by classical computers, they would be entirely tractable for a large enough quantum computer. That’s a problem because these schemes secure most of our information online.

The saving grace has been that today’s quantum processors are a long way from the kind of scale required. But according to a report in Science, New York University computer scientist Oded Regev has discovered a new algorithm that could reduce the number of qubits required substantially.

The approach essentially reworks one of the most successful quantum algorithms to date. In 1994, Peter Shor at MIT devised a way to work out which prime numbers need to be multiplied together to give a particular number—a problem known as prime factoring.

For large numbers, this is an incredibly difficult problem that quickly becomes intractable on conventional computers, which is why it was used as the basis for the popular RSA encryption scheme. But by taking advantage of quantum phenomena like superposition and entanglement, Shor’s algorithm can solve these problems even for incredibly large numbers.
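The asymmetry is easy to demonstrate: multiplying two primes is instant, but recovering them by classical search grows rapidly with their size. Here is a toy trial-division factorizer—hopeless for the 617-digit moduli RSA actually uses, which is the whole point:

```python
def prime_factors(n):
    """Naive trial division—fine for small numbers, utterly
    intractable for RSA-sized ones."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

p, q = 104_723, 104_729  # two primes; their product plays the "public key"
print(prime_factors(p * q))  # → [104723, 104729]
```

Every known classical algorithm, however clever, still scales badly enough on this problem that 2048-bit products remain out of reach; Shor’s algorithm changes that scaling fundamentally.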

That fact has led to no small amount of panic among security experts, not least because hackers and spies can hoover up encrypted data today and then simply wait for the development of sufficiently powerful quantum computers to crack it. And although post-quantum encryption standards have been developed, implementing them across the web could take many years.

It is likely to be quite a long wait though. Most implementations of RSA rely on at least 2048-bit keys, which is equivalent to a number 617 digits long. Fujitsu researchers recently calculated that it would take a completely fault-tolerant quantum computer with 10,000 qubits 104 days to crack a number that large.

However, Regev’s new algorithm, described in a pre-print published on arXiv, could potentially reduce those requirements substantially. Regev has essentially reworked Shor’s algorithm such that it’s possible to find a number’s prime factors using far fewer logical steps. Carrying out operations in a quantum computer involves creating small circuits from a few qubits, known as gates, that perform simple logical operations.

In Shor’s original algorithm, the number of gates required to factor a number scales with the square of the number of bits used to represent it, denoted n^2. Regev’s approach would only require n^1.5 gates because it searches for prime factors by carrying out smaller multiplications of many numbers rather than very large multiplications of a single number. It also reduces the number of gates required by using a classical algorithm to further process the outputs.
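Ignoring constant factors entirely (and in practice they matter a great deal), the exponent change alone shrinks the gate count substantially at RSA scale:

```python
# Back-of-envelope gate counts for factoring an n-bit modulus,
# keeping only the leading exponents and ignoring all constants.
n = 2048
shor_gates = n ** 2     # Shor: ~n^2 gates
regev_gates = n ** 1.5  # Regev: ~n^1.5 gates

print(f"Shor:  ~{shor_gates:.1e} gates")           # ~4.2e+06
print(f"Regev: ~{regev_gates:.1e} gates")          # ~9.3e+04
print(f"Ratio: ~{shor_gates / regev_gates:.0f}x")  # ~45x
```

The ratio is just n^0.5, about 45 for a 2048-bit key; the larger savings the paper estimates come from constant-factor improvements on top of this asymptotic gain.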

In the paper, Regev estimates that for a 2048-bit number this could reduce the number of gates required by two to three orders of magnitude. If true, that could enable much smaller quantum computers to crack RSA encryption.

However, there are practical limitations. For a start, Regev notes that Shor’s algorithm benefits from a host of optimizations developed over the years that reduce the number of qubits required to run it. It’s unclear yet whether these optimizations would work on the new approach.

Martin Ekerå, a quantum computing researcher with the Swedish government, also told Science that Regev’s algorithm appears to need quantum memory to store intermediate values. Providing that memory will require extra qubits and eat into any computational advantage it has.

Nonetheless, the new research is a timely reminder that, when it comes to quantum computing’s threat to encryption, the goal posts are constantly moving, and shifting to post-quantum schemes can’t happen fast enough.

Image Credit: Google

When Hordes of Little AI Chatbots Are More Useful Than Giants Like ChatGPT

0

AI is developing rapidly. ChatGPT has become the fastest-growing online service in history. Google and Microsoft are integrating generative AI into their products. And world leaders are excitedly embracing AI as a tool for economic growth.

As we move beyond ChatGPT and Bard, we’re likely to see AI chatbots become less generic and more specialized. AIs are shaped by the data they’re exposed to, which is used to make them better at what they do—in this case, mimicking human speech and providing users with useful answers.

Training often casts the net wide, with AI systems absorbing thousands of books and web pages. But a more select, focused set of training data could make AI chatbots even more useful for people working in particular industries or living in certain areas.

The Value of Data

An important factor in this evolution will be the growing costs of amassing training data for advanced large language models (LLMs), the type of AI that powers ChatGPT. Companies know data is valuable: Meta and Google make billions from selling advertisements targeted with user data. But the value of data is now changing. Meta and Google sell data “insights”; they invest in analytics to transform many data points into predictions about users.

Data is valuable to OpenAI—the developer of ChatGPT—in a subtly different way. Imagine a tweet: “The cat sat on the mat.” This tweet is not valuable for targeted advertisers. It says little about a user or their interests. Maybe, at a push, it could suggest interest in cat food and Dr. Seuss.

But for OpenAI, which is building LLMs to produce human-like language, this tweet is valuable as an example of how human language works. A single tweet cannot teach an AI to construct sentences, but billions of tweets, blogposts, Wikipedia entries, and so on, certainly can. For instance, the advanced LLM GPT-4 was probably built using data scraped from X (formerly Twitter), Reddit, Wikipedia and beyond.

The AI revolution is changing the business model for data-rich organizations. Companies like Meta and Google have been investing in AI research and development for several years as they try to exploit their data resources.

Organizations like X and Reddit have begun to charge third parties for API access, the system used to scrape data from these websites. Data scraping costs companies like X money, as they must spend more on computing power to fulfill data queries.

Moving forward, as organizations like OpenAI look to build more powerful versions of their GPT models, they will face greater costs for acquiring data. One solution to this problem might be synthetic data.

Going Synthetic

Synthetic data is created from scratch by AI systems and used to train more advanced AI systems. It is designed to serve the same purpose as real training data but is generated by AI rather than collected from the world.

It’s a new idea, but it faces many problems. Good synthetic data needs to be different enough from the original data it’s based on to tell the model something new, while similar enough to tell it something accurate. This can be difficult to achieve. If synthetic data amounts to just convincing copies of real-world data, the resulting AI models may struggle with creativity and entrench existing biases.

Another problem is the “Hapsburg AI” problem. This suggests that training AI on synthetic data will cause a decline in the effectiveness of these systems—hence the analogy using the infamous inbreeding of the Hapsburg royal family. Some studies suggest this is already happening with systems like ChatGPT.

One reason ChatGPT is so good is because it uses reinforcement learning with human feedback (RLHF), where people rate its outputs in terms of accuracy. If synthetic data generated by an AI has inaccuracies, AI models trained on this data will themselves be inaccurate. So the demand for human feedback to correct these inaccuracies is likely to increase.

However, while most people would be able to say whether a sentence is grammatically accurate, fewer would be able to comment on its factual accuracy—especially when the output is technical or specialized. Inaccurate outputs on specialist topics are less likely to be caught by RLHF. If synthetic data means there are more inaccuracies to catch, the quality of general-purpose LLMs may stall or decline even as these models “learn” more.

Little Language Models

These problems help explain some emerging trends in AI. Google engineers have revealed that there is little preventing third parties from recreating LLMs like GPT-3 or Google’s LaMDA AI. Many organizations could build their own internal AI systems, using their own specialized data, for their own objectives. These will probably be more valuable for these organizations than ChatGPT in the long run.

Recently, the Japanese government noted that developing a Japan-centric version of ChatGPT is potentially worthwhile to its AI strategy, as ChatGPT is not sufficiently representative of Japan. The software company SAP has recently launched its AI “roadmap” to offer AI development capabilities to professional organizations. This will make it easier for companies to build their own, bespoke versions of ChatGPT.

Consultancies such as McKinsey and KPMG are exploring the training of AI models for “specific purposes.” Guides on how to create private, personal versions of ChatGPT can be readily found online. Open source systems, such as GPT4All, already exist.

As development challenges—coupled with potential regulatory hurdles—mount for generic LLMs, it is possible that the future of AI will be many specific little—rather than large—language models. Little language models might struggle if they are trained on less data than systems such as GPT-4.

But they might also have an advantage in terms of RLHF, as little language models are likely to be developed for specific purposes. Employees who have expert knowledge of their organization and its objectives may provide much more valuable feedback to such AI systems, compared with generic feedback for a generic AI system. This may overcome the disadvantages of less data.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mohamed Nohassi / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through September 30)

0

ARTIFICIAL INTELLIGENCE

A Silicon Valley Supergroup Is Coming Together to Create an AI Device
Tripp Mickle and Cade Metz | The New York Times
“[Sam Altman and Jony Ive] and their companies are teaming up to develop a device that would succeed the smartphone and deliver the benefits of AI in a new form factor, unconstrained by the rectangular screen that has been the dominant computing tool of the past decade, according to two people familiar with the discussions.”

VIRTUAL REALITY

Mark Zuckerberg Just Previewed Meta’s New VR Avatars—and They Don’t Suck
Tom Carter | Business Insider
“In a ‘Metaverse interview’ on the Lex Fridman podcast, Zuckerberg showed off Meta’s new Codec virtual reality avatars that use scanning technology to build 3D models of the user’s face. And unlike the company’s original Metaverse avatars, which were mocked for being legless, dead-eyed, and deeply disconcerting, these projections are extremely realistic.”

ARTIFICIAL INTELLIGENCE

The New ChatGPT Can ‘See’ and ‘Talk.’ Here’s What It’s Like.
Kevin Roose | The New York Times
“Having an AI speak to you in a humanlike voice is a more intimate experience than reading its responses on a screen. And after a few hours of talking with ChatGPT this way, I felt a new warmth creeping into our conversations. Without being tethered to a text interface, I felt less pressure to come up with the perfect prompt. We chatted more casually, and I revealed more about my life.”

FUTURE

What If the Robots Were Very Nice While They Took Over the World?
Virginia Heffernan | Wired
“First it was chess and Go. Now AI can beat us at Diplomacy, the most human of board games. The way it wins offers hope that maybe AI will be a delight. …Bots like Cicero are going to understand our wants and needs and align with our distinctive worldviews. We will form buddy-movie partnerships that will let us drink from their massive processing power with a spoonful of sugary natural language. And if forced at the end of the road to decide whether to lose to obnoxious humans or gracious bots, we won’t give it a thought.”

ROBOTICS

Exoskeleton Suit Boosts Your Legs to Help You Run Faster
Carissa Wong | New Scientist
“Researchers have previously developed exoskeleton devices that help people to walk or jog more efficiently. Now, Giuk Lee at Chung-Ang University in Seoul, South Korea, and his colleagues have created an ‘exosuit’ that enables people to sprint faster. …To test its performance, the team asked nine non-elite runners to sprint 200 metres, twice while wearing the exosuit and twice without it. The researchers found that the participants ran 0.97 seconds faster, on average, when wearing the suit.”

TRANSPORTATION

The Air Force’s Big New Electric Taxi Flies at 200 MPH
Rob Verger | Popular Science
“Joby has been testing and developing electric aircraft for years; it flew a ‘subscale demonstrator,’ or small version of the plane, back in 2015. The full-sized aircraft that Joby has delivered to the Air Force is the first production prototype to come off the company’s line in Marina, California, in June. ‘It’s massive’ as a moment, JoeBen Bevirt, the company’s CEO, tells PopSci. ‘This is like a dream come true.’”

AUTOMATION

FedEx’s New Robot Loads Delivery Trucks Like It’s Playing 3D Tetris
Will Knight | Wired
“Better algorithms, new approaches to using machine learning for robots, and improved hardware and sensors have all started to open more commercial applications for advanced robots. ‘In the last year or two, people have taken advances in AI and machine learning and said ‘we can make a real business case here, whether it’s lowering costs or improving efficiency or whatever,’ says Matthew Johnson-Roberson, director of the robotics institute at Carnegie Mellon University.”

FUTURE

So Much for ‘Learn to Code’
Kelli María Korducki | The Atlantic
“In the age of AI, computer science is no longer the safe major. …The potential decline of ‘learn to code’ doesn’t mean that the technologists are doomed to become the authors of their own obsolescence, nor that the English majors were right all along (I wish). Rather, the turmoil presented by AI could signal that exactly what students decide to major in is less important than an ability to think conceptually about the various problems that technology could help us solve.”

LAW AND ETHICS

Hollywood Writers Reached an AI Deal That Will Rewrite History
Will Bedingfield | Wired
“In short, the contract stipulates that AI can’t be used to write or rewrite any scripts or treatments, ensures that studios will disclose if any material given to writers is AI-generated, and protects writers from having their scripts used to train AI without their say-so. …At a time when people in many professions fear that generative AI is coming for their jobs, the WGA’s new contract has the potential to be precedent-setting, not just in Hollywood, where the actors’ strike continues, but in industries across the US and the world.”

Image Credit: Maxime Lebrun / Unsplash

These Bacteria Eat Plastic Waste—and Then Transform It Into Useful Products

0

The first time I heard about the Great Pacific Garbage Patch, I thought it was a bad joke.

My incredulity soon turned to horror when I realized it’s real. The garbage patch, also known as the Pacific trash vortex, is a massive collection of debris in the North Pacific Ocean. Although made up of all sorts of human-generated waste, the main components are tiny pieces of microplastic.

From straws to trash bags, we use an astonishing amount of plastic—which often ends up in delicate ocean (and other) ecosystems. According to the Center for Biological Diversity, a nonprofit organization for protecting endangered species based in Arizona, at current rates plastic is set to outweigh all fish in the ocean by 2050.

A new study wants to turn the tide with synthetic biology. By engineering genetic circuits into a bacterial “consortium,” the team reprogrammed two strains to not only destroy polluting plastics—but to also transform the toxic waste into useful biodegradable material. Environmentally friendly and versatile, these upcycled plastics can be used to manufacture foams, adhesives, or even nylon—all without further taxing the environment.

The strategy isn’t just limited to the polyethylene terephthalate (PET)—one of the most common types of plastics—tested in the study, said the authors. “The underlying concept and strategies are potentially applicable…to other types of plastics” and could begin lighting the way toward “a sustainable bioeconomy.”

A Natural Plastic Predator

Plastic helped build modern society. Made of molecular chains called polymers, it’s malleable, versatile, and economical to mass-produce. It’s also notoriously difficult to break down.

To Dr. James Collins at MIT, synthetic biology can help us avoid turning the planet into a plastic wasteland. Pioneers in the engineering of synthetic gene circuits, study authors Collins and bioengineer Dr. Ting Lu at the University of Illinois Urbana-Champaign reasoned that genetically engineered bacteria could tackle the plastic conundrum head-on.

Although toxic to most organisms, plastic is an energy source to certain types of bacteria and fungi. Found in soil, the ocean, and even the guts of animals, these bacteria use specialized enzymes to break down different types of plastic. Enzymes are proteins that trigger or speed up biological processes—helping us digest a hefty meal, for example, or converting food into energy.

Unfortunately, these natural strains are sensitive to temperature and acidity and can often only digest plastic that’s already damaged by UV light or chemicals. Even strains that can break down PET plastics require weeks or months to do so, and can only handle small volumes.

A Synthetic Upgrade

Here’s where synthetic biology shines. Scientists in the field use genetic engineering to give organisms new abilities—for example, bacteria that can produce insulin—or even to build entirely new life forms never before seen in nature.

Prior to the recent study, scientists had mapped out multiple enzymes bacteria use to eat plastic. They tinkered with those metabolic processes by inserting or deleting genetic material—for example, to speed up their plastic-chomping abilities or to add enzymes that transform digested plastic waste into new, greener polymers.

This hasn’t been a smooth operation. Older methods work on single bacterial strains. But when faced with large amounts of debris, the bacteria are often overwhelmed. Broken down pieces of PET build up internally and inhibit the microbes’ metabolism—damaging their health.

Then there’s the technological hump. Upgrading plastic waste into usable products requires complex genetic engineering. To accomplish this, the team explained, they needed to build “advanced designer pathways” linking multiple enzymes to produce the final product. Like directing a genetic symphony, this synthetic upgrade demanded fine-tuning throughout the bacteria’s inner cellular workings—a difficult enough feat when manipulating a single strain.

Still, they wondered: if one strain can’t efficiently do the job, what about teamwork?

A Division of Labor

In nature, we see that multispecies microbial communities work together in plastic biodegradation, said the team. So they expanded the bacterial workforce from one synthetic strain to a simple ecosystem of two.

At the heart of this ecosystem is a division of labor. PET breaks down into two main components—terephthalic acid and ethylene glycol—with massively different properties. Mixed food sources are the microbes’ Achilles heel: They’re awful metabolic multitaskers, with pathways for degrading one molecule often suppressing those of another.

Here, the team built their dynamic duo from two strains of Pseudomonas putida, a Cheetos-shaped bacterium often found in polluted water and soil. One strain had a taste for terephthalic acid, the other for ethylene glycol. This particular type of bacteria is a darling in biodegradation research, as it naturally digests aromatic molecules such as styrene, which is widely used to make plastics and rubber. It’s also easy to genetically manipulate and can adapt to new metabolic pathways, making the strain a perfect starting point.

In each natural strain, the team deleted genes involved in metabolizing either terephthalic acid or ethylene glycol and added genes that allowed them to consume the other component.

The result was a bacterial tag-team. Each highly efficient at eating their respective plastic waste product, they also collaborated well when cultured together—neither strain inhibited the other’s diet. Both stuck to their own food source and happily thrived alongside each other.

As a comparison, the team also engineered a multitasker strain that eats both plastic byproducts. Compared to the specialized tag-team, the single strain took far longer to digest the waste both individually and when given as a mix.
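The advantage of splitting the pathways can be caricatured with a toy substrate-depletion model, in which the generalist’s two pathways inhibit each other while each specialist runs at full speed. All rates and the inhibition strength below are made-up illustrative numbers, not measurements from the study:

```python
def time_to_deplete(tpa, eg, cross_inhibition, rate=1.0, dt=0.01, tmax=500.0):
    """Euler-step a toy model: each substrate is consumed at `rate`,
    scaled down while the *other* substrate is abundant (a stand-in
    for one strain's pathways interfering with each other)."""
    t = 0.0
    while (tpa > 0.05 or eg > 0.05) and t < tmax:
        tpa = max(0.0, tpa - rate * dt / (1.0 + cross_inhibition * eg))
        eg = max(0.0, eg - rate * dt / (1.0 + cross_inhibition * tpa))
        t += dt
    return t

# Two specialists: no interference between pathways.
consortium = time_to_deplete(10.0, 10.0, cross_inhibition=0.0)
# One generalist: each pathway suppressed by the other substrate.
generalist = time_to_deplete(10.0, 10.0, cross_inhibition=0.5)
print(f"consortium: {consortium:.1f}, generalist: {generalist:.1f}")
```

The sketch is only qualitative: removing cross-inhibition lets the mixed culture clear both waste products much sooner, which is the pattern the team observed.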

Trash to Treasure

Now that they had their bacteria fully digesting plastic waste, the team next integrated several genes to transform it into new materials.

First, they rewired both strains to produce a highly promising biodegradable polymer. The strategy worked exceptionally well. In one test lasting over four days, the two strains pumped out the desired polymer at a far higher rate than the single strain—producing up to 92 percent more of it as a result.

In another test, the system efficiently produced a chemical often used to synthesize plastic and nylon—one that’s notoriously difficult for single strains to upcycle using plastic waste. All it took was a few genetic swaps, and the division of labor readily generated the target chemical.

The idea of upcycling plastic waste isn’t new. In the past, scientists have used heat, force, and chemicals to break down waste and rebuild it into usable material. Bioconversion offers a new, cleaner, more efficient path. All reactions take place inside the microbes, linking waste degradation directly to the desired product in one step. Microbes are also easy to culture in industrial-sized vats, making it possible to scale plastic upcycling.

The study advances this vision of bio-upcycling by making the process more efficient.

A key insight of the study, the team said, is that a division of labor is especially important for fine-tuning the PET upcycling process. As the tools further develop, they believe synthetic bacterial ecosystems could be used to tackle other plastic pollutants and waste too.

Image Credit: Marc Newberry / Unsplash

Scientists Crack How Gravity Affects Antimatter: What That Means for Our Understanding of the Universe

0

A substance called antimatter is at the heart of one of the greatest mysteries of the universe. We know that every particle has an antimatter companion that is virtually identical to itself, but with the opposite charge. When a particle and its antiparticle meet, they annihilate each other—disappearing in a burst of light.

Our current understanding of physics predicts that equal quantities of matter and antimatter should have been created during the formation of the universe. But this doesn’t seem to have happened as it would have resulted in all particles annihilating right away.

Instead, there’s plenty of matter around us, yet very little antimatter—even deep in space. This enigma has led to a grand search to find flaws in the theory or otherwise explain the missing antimatter.

One such approach has focused on gravity. Perhaps antimatter behaves differently under gravity, being pulled in the opposite direction to matter? If so, we might simply be in a part of the universe from which it is impossible to observe the antimatter.

A new study, published by my team in Nature, reveals how antimatter actually behaves under the influence of gravity.

Other approaches to the question of why we observe more matter than antimatter span numerous sub-fields in physics. These range from astrophysics—aiming to observe and predict the behavior of antimatter in the cosmos with experiments—to high-energy particle physics, investigating the processes and fundamental particles that form antimatter and govern their lifetime.

While slight differences have been observed in the lifetime of some antimatter particles compared to their matter counterparts, these results are still far from a sufficient explanation of the asymmetry.

The physical properties of antihydrogen—an atom composed of an antimatter electron (the positron) bound to an antimatter proton (antiproton)—are expected to be exactly the same as those of hydrogen. In addition to possessing the same chemical properties as hydrogen, such as color and energy, we also expect that antihydrogen should behave the same in a gravitational field.

The so-called “weak equivalence principle” in the theory of general relativity states that the motion of bodies in a gravitational field is independent of their composition. This essentially says that what something is made of doesn’t affect how gravity influences its movements.

This prediction has been tested to extremely high accuracy for gravitational forces with a variety of matter particles, but never directly on the motion of antimatter.

Even with matter particles, gravity stands apart from other physical theories, in that it has yet to be unified with the theories that describe antimatter. Any observed difference in antimatter gravitation may help shed light on both issues.

To date, there have been no direct measurements on the gravitational motion of antimatter. It is quite challenging to study because gravity is the weakest force.

That means it is difficult to distinguish the effects of gravity from other external influences. It has only been with recent advances in techniques to produce stable (long-lived), neutral, and cold antimatter that measurements have become feasible.

Trapped Antimatter

Our work took place at the ALPHA-g experiment at Cern, the world’s largest particle physics lab, based in Switzerland. The experiment was designed to test the effects of gravity by containing antihydrogen in a vertical, two-meter-tall trap. Antihydrogen is created in the trap by combining its antimatter constituents: the positron and the antiproton.

The ALPHA-g apparatus being installed in 2018. Image Credit: William Bertsche / University of Manchester, CC BY-SA

Positrons are readily produced by some radioactive materials—we used radioactive table salt. To create cold antiprotons, however, we had to use immense particle accelerators and a unique decelerating facility that operates at Cern.

Both ingredients are electrically charged and can be trapped and stored independently as antimatter in special devices called Penning traps, which consist of electric and magnetic fields.

Anti-atoms, however, are not confined by Penning traps, and so we had an additional device called a “magnetic bottle trap,” which confined the anti-atoms. This trap was created by magnetic fields generated by numerous superconducting magnets.

These were operated to control the relative strengths of the different sides of the bottle. Notably, if we weakened the top and bottom of the bottle, the atoms would be able to leave the trap under the influence of gravity.

We counted how many anti-atoms escaped upwards and downwards by detecting the antimatter annihilations created as the anti-atoms collided with surrounding matter particles in the trap. By comparing these results against detailed computer models of this process in normal hydrogen atoms, we were able to infer the effect of gravity on the anti-hydrogen atoms.

Our results are the first from the ALPHA-g experiment and the first direct measurement of antimatter’s motion in a gravitational field. They show that antihydrogen gravitation is the same as that of hydrogen: it falls downward rather than rising, within the uncertainty limits of the experiment.

This means our research has empirically ruled out a number of historical “anti-gravity” theories, which suggested that antimatter would gravitate in exactly the opposite direction to normal matter.

The current measurement is an important milestone for the experiment. Future work in the ALPHA-g experiment will improve its precision through better characterization and control of important aspects of the experiment, such as the traps and the atom cooling systems.

There’s still ample room to find new results that can help explain matter-antimatter asymmetry. Physics is meant to describe observed reality, and there can always be surprises in the way the world works.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Keyi “Onyx” Li/U.S. National Science Foundation

Have We Already Recorded Proof of Alien Civilizations? There’s Only One Way to Know for Sure


My college laptop was slow. It didn’t help that the internet was too. Neither fact distracted me from two crucial tasks: downloading music and searching for aliens. The former was a study in patience—tracks spooled out at glacial speeds—the latter a (lazy) labor of love. Scientists had the genius idea of parceling out astronomical data to laptops where a screen saver could comb through them for alien radio signals.

I’m sad to report: None found.

But a lot has changed since then. Computers are faster, software is smarter, and the amount of astronomical data—across the electromagnetic spectrum, not to mention gravitational waves—has exploded. It’s worth asking: If the data was too much for astronomers to process years ago, what potentially revolutionary signals have we missed since then?

In a recently released report, a team of Caltech and NASA Jet Propulsion Laboratory astronomers, led by Joseph Lazio, George Djorgovski, Curt Cutler, and Andrew Howard, argue we can’t know for sure unless we change our search strategy to match the times.

Whereas the search for extraterrestrial intelligence (SETI) has been focused on the detection of radio signals—think Jodie Foster with a pair of headphones in the movie Contact—we’ve since recorded an abundance of data from across the sky and developed tools that can comb it for subtle outliers, from radio signals to unusually bright or flickering objects.

“Ten, twenty years ago, we didn’t have this explosion of artificial intelligence and computation technologies,” Anamaria Berea, a computational social scientist at George Mason University not involved in the project, told Wired. “Now they can be used also for archived data.”

The idea is two-fold: First, let’s widen the search from primarily radio signals to all technosignatures—that is, any telltale signs of technological civilizations, intended or not, from advanced communications to megastructures. Second, let’s search for those technosignatures in all current and future observations by training algorithms to spot aberrations and outliers in the data.

A key benefit of such an approach is we “let the data tell us what is in the data,” the team writes. Instead of plastering our own biases on the search, we can simply look for anything weird and then take a closer look to figure out why it’s different.
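The “look for anything weird” idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not the report’s actual pipeline: the catalog values and the 5-sigma cutoff below are invented, and the method shown is a robust z-score built from the median and median absolute deviation (MAD), which resists being skewed by the very outliers it hunts.

```python
# Hypothetical anomaly-flagging sketch: score each object by how far it
# sits from the population, using median/MAD instead of mean/std so the
# outlier itself cannot distort the baseline.

def robust_z_scores(values):
    """Return a z-like score for each value, based on median and MAD."""
    ordered = sorted(values)
    n = len(ordered)
    median = (ordered[n // 2] + ordered[(n - 1) // 2]) / 2
    deviations = sorted(abs(v - median) for v in values)
    mad = (deviations[n // 2] + deviations[(n - 1) // 2]) / 2
    scale = 1.4826 * mad or 1.0  # 1.4826 matches MAD to a normal sigma; guard zero spread
    return [(v - median) / scale for v in values]

# Invented brightness catalog with one oddball worth a closer look.
brightness = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 42.0]
outliers = [i for i, z in enumerate(robust_z_scores(brightness)) if abs(z) > 5]
print(outliers)  # index of the anomalously bright object: [6]
```

Whatever the flagged object turns out to be, a human astronomer still makes the call, which is exactly the division of labor the report proposes.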

At the beginning of the last century, the team says, Marconi, Tesla, and Edison all believed they’d detected radio signals from Mars. They were smart, and wrong. Their judgment was clouded by scientific and technological limits—they didn’t know that signals in the detected band couldn’t get through Earth’s atmosphere—and by cultural biases—there was strong popular interest in Mars at the time.

SETI, constrained by resources and the availability of data, has suffered biases too. Astronomers could only run so many searches on a limited range of instruments, so they had to decide which lines of inquiry were most valuable. Assumptions have commonly included the idea that technological civilizations would choose to signal other civilizations “using mid-20th century technology” coded in ways we would understand.

“Given the diversity of human cultures, including the existence of ancient and medieval documents that have not yet been deciphered or translated, there is reason to doubt the likely success of such heavily biased approaches,” the team says.

The new report doesn’t dismiss these approaches—radio signals are still a great way to hunt down aliens, and we’ve only scratched the surface—but the report also suggests new data allows us to widen our search, and new tools can help us reduce inherent anthropocentrism.

What technosignatures—intended or otherwise—might we keep an eye out for? Beyond radio signals, the report digs into the likes of lasers, megastructures, modulated quasars, and probes in orbit around our sun or sitting unnoticed on the surface of moons or planets.

The Wide-Field Infrared Survey Explorer (WISE) space telescope, for example, completed a detailed all-sky survey in infrared wavelengths ideal for searching out the theoretical heat signatures of Dyson spheres. Scientists have long proposed advanced civilizations might choose to surround their home stars with these megastructures to harvest energy.

Of course, this isn’t the first time anyone’s thought of using AI in astronomy. On the contrary, AI has a long history classifying galaxies and picking out exoplanets. Scientists recently used it to sharpen the first-ever image of a black hole. SETI has also employed machine learning in its search for radio signals. The new idea here is to comb through everything we’ve got—even when we don’t know what we’re looking for.

The standard disclaimers apply: AI is subject to bias too. In this case, it’s only as good as the assumptions of its designers and the data it’s fed. Careful preparation of information is crucial, alongside the deployment and testing of multiple models, the team writes.

Still, astronomers will have the final say, reviewing whatever outliers the models spit out. These may turn out to be some new natural phenomenon, which is still of value, or, if we’re lucky, the signature of another civilization. Win-win.

Future sky surveys will only add to the pile of sky-wide data to crunch. The Vera C. Rubin Observatory in Chile will observe billions of objects in our galaxy through time. And broader searches for biosignatures—proof of any life, no matter how simple—are heating up as the James Webb and future telescopes begin to analyze exoplanet atmospheres.

“We now have vast data sets from sky surveys at all wavelengths, covering the sky again and again and again,” said Djorgovski. “We’ve never had so much information about the sky in the past, and we have tools to explore it.”

Image Credit: ESO/S. Brunier

Transgenic Silkworms Spin Spider Silk 6x Tougher Than Kevlar


The other day I dove headfirst into a spiderweb while half-asleep inside my camper van.

Screams aside, the logical part of me marveled at how fast a single creepy-crawly had woven such an intricate—and surprisingly bouncy and resilient—web in just a few hours.

Spider silk is a natural wonder. It’s tough and resists damage but is also highly flexible. Light, strong, and biodegradable, the silk can be used for anything from surgical sutures to bulletproof vests.

So why don’t we produce more of these silks for human use? Because spiders are terrible biological manufacturing machines. Creepy factor aside, they’re very combative—put a few hundred together, and you’ll soon be left with a handful of victors and very little product.

Thanks to genetic engineering, however, we may now have a way to skip spiders altogether in the manufacturing of spider silk.

In a study published last week, a team from Donghua University in China used CRISPR to create genetically engineered silkworms that produce spider silk. The resulting strands are tougher than Kevlar—the synthetic fiber used in bulletproof vests. Compared to synthetic materials, this spider silk is a far more biodegradable alternative that may be more easily scaled for production.

Dr. Justin Jones at Utah State University, who was not involved in the study, gave the new weave a nod of approval. The resulting material is “a really high-performance fiber,” he said to Science.

Meanwhile, to the authors, their strategy isn’t limited to spider silk. The study uncovered several biophysical principles for building silk materials with exceptional strength and flexibility.

Further experimentation could potentially yield next-generation textiles beyond current capabilities.

Of Worms, Arthropods, and History

Nature offers a wealth of inspiration for cutting-edge materials.

Take Velcro, the hook-and-loop material that may be hanging your bathroom towels or securing your kid’s shoes. The ubiquitous material was first conceived by Swiss engineer George de Mestral in the 1940s when trying to brush burrs off his pants after a hike. A further look under the microscope showed the burrs had sharp hooks which snagged loops in the fabric. De Mestral turned the hiking nuisance into the hook-and-loop fabric available in all hardware stores today.

A less prickly example is silk. First cultivated in ancient China roughly 5,000 years ago, silk is harvested from wriggly, rotund silkworms and spun into fabrics using primitive looms. These delicate silks spread throughout East Asia and to the west, helping establish the legendary Silk Road.

Yet as anyone who’s owned a silk garment or sheets will know, these are incredibly delicate materials that easily rip and break down.

The challenges we face with silkworm silk are shared by most materials.

One problem is strength: how much pulling a material can withstand before it permanently stretches out of shape. Imagine yanking a slightly shrunken sweater after washing. The less strength the fibers have, the less likely the garment will hold its shape. The other problem is toughness. Simply put, it’s how much energy a material can absorb before breaking. An old sweater will easily spring holes with just a tug. Kevlar, on the other hand, can literally take bullets.
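Toughness has a precise meaning materials scientists use: the area under a material’s stress-strain curve up to the breaking point, i.e. the energy absorbed per unit volume before failure. A minimal sketch, with invented curves (a stretchy fiber versus a brittle one), shows why a fiber that keeps stretching can absorb far more energy even at lower peak stress:

```python
# Toughness as the area under a stress-strain curve (trapezoidal rule).
# Both curves below are invented for illustration; stresses in MPa,
# strains dimensionless, so the area comes out in MJ per cubic meter.

def toughness(strains, stresses):
    """Trapezoidal area under the stress-strain curve up to failure."""
    area = 0.0
    for i in range(1, len(strains)):
        area += (stresses[i] + stresses[i - 1]) / 2 * (strains[i] - strains[i - 1])
    return area

# A fiber that stretches to 30% strain before breaking...
stretchy = ([0.0, 0.1, 0.2, 0.3], [0.0, 200.0, 350.0, 400.0])
# ...versus one that snaps at 2% strain despite higher peak stress.
brittle = ([0.0, 0.01, 0.02], [0.0, 300.0, 600.0])

print(toughness(*stretchy), toughness(*brittle))  # 75.0 vs 6.0 MJ/m^3
```

The stretchy fiber absorbs over ten times the energy, which is the sense in which spider silk outperforms stiffer synthetics.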

Unfortunately, the two properties are mutually exclusive in today’s engineered materials, said the team.

Nature, however, has a solution: spider silk is both strong and tough. The problem is wrangling the arthropods to produce silk in a safe and effective environment. These animals are vicious predators. A hundred silkworms in captivity can cuddle in peace; throw a hundred spiders together and you get a bloodbath out of which only one or two remain alive.

A Spider-Worm Womb

What if we could combine the best of silkworms and spiders?

Scientists have long wanted to engineer a “meet-cute” date for the two species with the help of genetic engineering. No, it’s not a cross-species rom-com. The main idea is to genetically endow silkworms with the ability to produce spider silk.

But the genes encoding spider silk proteins are large. This makes them tough to jam into the genetic code of other creatures without overwhelming natural cells and causing them to fail.

Here, the team first used a computational method to hunt down the minimal structure of silk. The resulting model mapped silk protein differences between silkworms and spiders. Luckily, both species spin fibers out of similar protein structures—called polyamide fibers—though each is based on different protein components.

Another bit of luck is shared anatomy. “The silk glands of domestic silkworms and spider silk glands exhibit remarkably similar” physical and chemical environments, said the team.

Using the model, they identified a critical component that boosts silk strength and toughness—a relatively small silk protein, MiSp, found in Araneus ventricosus spiders from East Asia.

With CRISPR-Cas9, a gene editing tool, the team then added genes coding for MiSp into silkworms—essentially rejiggering them to spin spider silk. Accomplishing this was a technological nightmare, requiring hundreds of thousands of microinjections into fertilized silkworm eggs to edit their silk-spinning glands. As a sanity check, the team also added a gene that made the silkworms’ eyes glow hauntingly red, which signaled success.

Study author Junpeng Mi “danced and practically ran to” the office of lead author Dr. Meng Qing. “I remember that night vividly, as the excitement kept me awake,” said Mi.

The resulting worm-spider silks are roughly six times tougher than Kevlar but still flexible. It’s surprising, said Jones, because fibers using MiSp aren’t always stretchy. As a bonus, the silkworms also naturally sprayed a sort of protective coating that strengthens the fibers, making them potentially more durable than previously made artificial spider silk.

The team is further exploring their computational model to design biologically compatible silk for medical sutures. Beyond that, they hope to get more creative. Synthetic biologists have long wanted to develop artificial amino acids (the molecular pieces that make up proteins). What would happen if we added synthetic amino acids to biodegradable fabrics?

“The introduction of over one hundred engineered amino acids holds boundless potential for engineered spider silk fibers,” said Mi.

Image Credit: Junpeng Mi, College of Biological Science and Medical Engineering, Donghua University, Shanghai, China

You’ll Soon Be Able to Buy Genetically Engineered Glow-in-the-Dark Petunias


Biotechnology is often advertised as the solution to world hunger or a way to cure disease. But new glow-in-the-dark house plants suggest it can have more whimsical uses too.

Our rapidly improving ability to edit the genetic code is opening doors to all kinds of groundbreaking new possibilities. From tackling malnutrition with vitamin fortified rice to re-engineering the body’s immune cells to fight cancer, the technology is being put to use tackling some of the world’s most pressing problems.

But while most efforts to take advantage of advances in biotechnology are decidedly practical, others are driven by more aesthetic goals. Since its founding in 2019, Idaho-based startup Light Bio has been using genetic engineering to splice genes from a bioluminescent mushroom into the humble petunia.

The goal is to create glow-in-the-dark house plants that could help turn people’s homes into a scene from the movie Avatar, according to a report in Wired. And earlier this month, the company finally got permission from the US Department of Agriculture to sell its neon green petunias in the US.

Bioluminescence isn’t a particularly rare trick in nature: certain bacteria, fish, amphibians, insects, and even worms can glow. But getting plants to do it is harder than it might seem. While groups have previously managed to create luminous foliage by inserting genes from fireflies or bacteria into plants, they either didn’t glow very brightly or needed chemical treatments to light up.

The key, it turns out, is adding genetic components that integrate well with the host. Light Bio says they’ve managed to do this by borrowing a metabolic pathway found in mushrooms that produces luciferin—the molecule responsible for bioluminescence. The pathway relies on a molecule known as caffeic acid, which is already found in significant quantities in plants where it is used to build cell walls.

The company’s product is based on research carried out by cofounder Karen Sarkisyan, a professor at Imperial College London. In 2020, his team borrowed DNA from the tropical mushroom Neonothopanus nambi and inserted it into tobacco plants, a popular model organism for plant research. The leaves, stems, roots, and flowers of the engineered foliage glowed bright green.

Now they’ve applied the same idea to petunias, which they plan to start selling next year. The company already has 10,000 people on their waiting list, and they also hope to expand into other popular ornamental species.

In determining whether the company could sell its plants, the USDA considered only whether the modified petunias posed a greater plant-pest risk than regularly cultivated petunias. But Jennifer Kuzma, codirector of the Genetic Engineering and Society Center at North Carolina State University, told Wired the plants could pose other ecological risks by affecting insects and animals not accustomed to glowing plants.

Given the glow-in-the-dark plants are primarily designed to be decorative, whether those risks are worth taking could be up for debate. But there could also be more practical uses for plants that can emit light. In 2021, researchers at MIT combined genetically engineered bioluminescent plants with energy storing nanoparticles to significantly amp up their brightness. The group claimed the light-emitting plants could one day be used to light building interiors without electricity.

And bioluminescence isn’t the only trick that genetic engineers want to imbue our house plants with. French startup Neoplants has modified the ubiquitous golden pothos to enhance its ability to scrub various pollutants out of the air. The goal is to replace electrically powered air purifiers—though you might need a lot of them.

Whether either of these Franken-plants can become more than a novelty remains to be seen. But they do show that as the biotechnology industry matures, gene-splicing could soon be popping up in all kinds of products, both functional and fun.

Image Credit: Light Bio

This Week’s Awesome Tech Stories From Around the Web (Through September 23)


ARTIFICIAL INTELLIGENCE

OpenAI’s Dall-E 3 Is an Art Generator Powered by ChatGPT
Will Knight | Wired
“Dall-E 3 will…let users refine a creation through ChatGPT, as if they were asking a real artist to make changes. ‘You won’t really have to worry about fussing around with really long prompts,’ says Aditya Ramesh, lead researcher and head of the Dall-E team. ‘Instead, you can just interact with ChatGPT as if you were talking to a coworker.'”

AUTOMATION

Enough Talk, ChatGPT—My New Chatbot Friend Can Get Things Done
Will Knight | Wired
“After my own test drive of Auto-GPT, I could understand Crivello’s conviction about the future. If the errors can be ironed out—a fairly big if—I can imagine a future where AI agents help with a lot of chores that currently involve typing into a web browser or moving and clicking a mouse. As with ChatGPT, when Auto-GPT works, it can feel like magic.”

HEALTH

How Inverse Vaccines Might Tackle Diseases Like Multiple Sclerosis
Cassandra Willyard | MIT Technology Review
“If researchers can get these vaccines to work, the payoff could be enormous. Many people with autoimmune issues take immunosuppressive drugs that dampen the entire immune system, making them more vulnerable to infections and cancer. But a vaccine that tamps down the immune response to a specific antigen wouldn’t have that effect. ‘It’s a field where a lot of people want to make the breakthrough and become the next chapter in the glorious history of vaccination,’ Steinman says.”

ENERGY

Looking to Space in the Race to Decarbonize
Nell Gallogly | The New York Times
“A 1980 review by NASA concluded that the first gigawatt of space-based solar power (enough energy to power 100 million LED bulbs) would cost more than $20 billion ($100 billion today). By 1997, NASA estimated that that number had dropped to about $7 billion ($15 billion today); now, it is estimated to be closer to $5 billion, according to a study conducted for the European Space Agency in 2022.”

COMPUTING

The Signal Protocol Used by 1+ Billion People Is Getting a Post-Quantum Makeover
Dan Goodin | Ars Technica
“[Though estimates of when it will happen vary], there is little disagreement…there will come a day when many of the most widely used forms of encryption will die at the hands of quantum computing. To head off that doomsday eventuality, engineers and mathematicians have been developing a new class of PQC, short for post-quantum cryptography. The PQC added to the Signal Protocol on Monday is called PQXDH.”

TRANSPORTATION

It’s Still Tesla’s World
Andrew Moseman | The Atlantic
“Tesla, which only sells EVs, is already profitable, even as it has slashed vehicle prices multiple times this year. In the decade-plus since it launched the Model S, Tesla has cemented that advantage with a business model that Detroit still struggles to match.”

DIGITAL MEDIA

Explore the Ancient Aztec Capital in This Lifelike 3D Rendering
Anna Lagos | Wired
“Digital artist Thomas Kole, originally from Amersfoort, Netherlands, has re-created the capital of the Aztec, or Mexica, empire with so much detail that it looks like a living metropolis. ‘What did the ancient, enormous city built atop a lake look like?’ wondered Kole, as he explored Mexico City on Google Maps. …For a year and a half, he turned to historical and archaeological sources as he sought to bring Tenochtitlán back to life while remaining as faithful as possible to what we know about the city.”

TECH

Google’s Bard Just Got More Powerful. It’s Still Erratic.
Kevin Roose | The New York Times
“This week, Bard—Google’s competitor to ChatGPT—got an upgrade. One interesting new feature, called Bard Extensions, allows the artificial intelligence chatbot to connect to a user’s Gmail, Google Docs and Google Drive accounts. …I put the upgraded Bard through its paces on Tuesday, hoping to discover a powerful AI assistant with new and improved abilities. What I found was a bit of a mess.”

SCIENCE

The Animals Are Talking. What Does It Mean?
Sonia Shah | The New York Times Magazine
“For centuries, the linguistic utterances of Homo sapiens have been positioned as unique in nature, justifying our dominion over other species and shrouding the evolution of language in mystery. Now, experts in linguistics, biology and cognitive science suspect that components of language might be shared across species, illuminating the inner lives of animals in ways that could help stitch language into their evolutionary history—and our own.”

TECH

How to Avoid ‘Death by LLM’
Tim Cooper | BigThink
“All organizations pivoting to generative AI need to ask whether their model hits key benchmarks. If the cost of adopting AI is prohibitive, companies can make incremental changes. Experimentation with enterprise AI is crucial; not all features result in long-term use and retention.”

Image Credit: Fraser Tull / Unsplash

The Limits of Our Personal Experience and the Value of Statistics


It’s tempting to believe that we can simply rely on personal experience to develop our understanding of the world. But that’s a mistake. The world is large, and we can experience only very little of it personally. To see what the world is like, we need to rely on other means: carefully collected global statistics.

Of course, our personal interactions are part of what informs our worldview. We piece together a picture of the lives of others around us from our interactions with them. Every time we meet people and hear about their lives, we add one more perspective to our worldview. This is a great way to see the world and expand our understanding; I don’t want to suggest otherwise. But I want to remind ourselves how little we can learn about our society through personal interactions alone, and how valuable statistics are in helping us build the rest of the picture.

The Horizon of Our Personal Experience

How many people do you know personally?

Let’s take a broad definition of what it means to know someone and say that we include everyone you know by name. A study in the US asked how many people Americans know by name and found that the average person knows 611.

Let’s assume you are more social than the average American, and you know 800 people. In a world of 8 billion, this means that you know 0.00001% of the population. A 100,000th of a percent.
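The arithmetic behind these figures is easy to verify:

```python
# Checking the numbers in the text: 800 acquaintances out of 8 billion people.
known = 800
world = 8_000_000_000

print(f"{known / world:.5%}")  # share of humanity you know by name: 0.00001%
print(f"{world // known:,}")   # strangers for every person you know: 10,000,000
```

That last figure is the "ten million people you do not know" for every person you do.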

It’s hard to visualize how small a fraction this is. If this illustration were drawn to scale, the blue rectangle—which represents the world population—would be as large as a sheet of printer paper, while the yellow square—which represents the number of people a person knows—would have the diameter of a human hair.

This is why I’m very skeptical when people say things about “the world these days” based on what they hear from people they know.

We cannot see much of the world through our direct experience. The horizon of our personal experience is very narrow. For every person you know, there are ten million people you do not know.

And chances are that the people you know are quite similar to you, far from representative of the world—or your country—as a whole.

How Wide Can the Horizon of Our Personal Experience Be?

Perhaps you think restricting the people you learn from to the number of people you know by name is too narrow. After all, you also learn from strangers you meet, even if you don’t get to know their names.

Let’s assume you are exceptionally good at this, and you have a conversation with three new people every single day of your life.

If you can keep this up for 73 years, you will get to know 80,000 people. That’s more than a hundred times the number of people you’d know by name.

This is still a tiny fraction of the world. After a lifetime of speaking with people, you will have spoken to 0.001% of the world’s population. For every person you’ve had a conversation with, there are still 100,000 people you’ve never spoken to.

Drawn to scale, the orange square, which represents the number of people you could ever speak to, would be less than a millimeter (0.8mm) wide.
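The lifetime-conversations estimate checks out the same way:

```python
# Three new conversations a day, every day, for 73 years.
per_day, years, world = 3, 73, 8_000_000_000

met = per_day * 365 * years
print(met)                   # 79935, roughly the 80,000 people in the text
print(f"{met / world:.4%}")  # about 0.001% of humanity
print(world // met)          # roughly 100,000 never-met people per conversation
```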

I am focusing on personal interactions as the most direct and in-depth way to learn about others, but they are not the only experiences by which we learn about others. We also learn by seeing other people’s clothes, by seeing their houses, or by hearing others talk about their personal experiences. But while these experiences also help, they still don’t get us very far. The world is large, and even if you are exceptionally attentive and exceptionally good at making connections and speaking to people, it is simply impossible to see much of the world directly.

The Fragmented Perspective of the News Media: Some Spotlights on Particular People, But Much of the World Is Left in Darkness

The limits of our personal experience don’t reach far beyond ourselves. How can we know about the world if we want to see beyond this tight horizon?

In one way or another, we have to rely on the media. Whether it is television or radio, the newspaper or photography, books, podcasts, documentaries, research papers, statistical tables, or social media.

This fact is so obvious that it is easy to miss how important it is: everything you hear about anyone who is more than a few dozen meters away, you know through some form of media.

That’s why the media we choose to rely on is so important for our understanding of the world.

The news is the media that shapes our picture of the world more than any other. Today, it’s often intertwined with social media. It is valuable as it lets us see beyond our own tight horizon, but the view the news offers is a spotted and fragmented one.

The news reports on the unusual things that happen on a particular day, but the things that happen every day never get mentioned. This gives us a biased and incomplete picture of the world; we are inundated with detailed news on terrorism but hardly ever hear of everyday tragedies like the fact that 16,000 children die every single day.

The illustration below visualizes this fragmented view. The news focuses on exceptionally powerful people or on those who experienced unusual tragedies. But while it puts the spotlight on those few people, it leaves most of the world in darkness.

The problem is not so much what the news media covers but what it does not cover. Those left in darkness are often poor and powerless and geographically far from us. What we see in the news is not nearly enough to understand the world we are living in.

What Is Missing: Everyone Else—for This, We Need Global Data

Of course, it is challenging to hear about everyone. But that’s the challenge we have to take on if we don’t want to be left with a scattered and biased perspective of the world.

If we want to see what on earth is going on, we have to tell all the stories. This is possible. Telling many stories at once is statistics.

Statistical methods make it possible to draw reliable conclusions about a population as a whole. Statistics is an extraordinary cultural achievement that allows us to broaden our view, from the individual stories of those in the spotlight to a perspective that includes everyone.

Global economic data can tell us about the incomes of everyone on this planet. Global health data aim to tell us about everyone’s cause of death. And similarly, we can learn about everyone who lacks access to basic electricity, everyone who lacks access to clean drinking water, and everyone who lacks access to basic sanitation.

Global statistics don’t only allow us to see what the “world these days” looks like, but also how it has changed. Statistics that document how the world has changed are often very surprising to those who mostly rely on the news to understand the world. While the news focuses overwhelmingly on all the things that are going wrong, historical statistics allow us to also see what has gone right—the immense progress the world has achieved.

Statistics can illuminate the world in a way that our personal experiences and the news media can’t. This is why my colleagues and I at Our World in Data rely on global statistics to understand how the world is changing.

The visual below illustrates what carefully-collected global statistics make possible: they illuminate the entire world around us, and allow us to see what is happening to everyone.

No Data Is Perfect

The collection and production of good statistics is a major challenge. Data might be unrepresentative in some ways, it might be mismeasured, and some data might be missing entirely. Everyone who relies on statistics to form their worldview needs to be aware of these shortcomings.

Our goal at Our World in Data is to present the best available data and at the same time highlight their shortcomings. The most important work is done by the statisticians who collect and publish the global databases in the first place. Our role is to make their work accessible and understandable. To achieve this goal we speak with experts, read the scientific literature, and analyze the available data so that we can surface the best available statistics and point out the limitations that even the best data carry.

A Statistical Understanding of the World Needs to Become Much More Central to Our Culture

I don’t want to suggest that it is a bad idea to rely on personal experience or the news to learn about the world. Each way of learning about the world has its value. It’s about how we bring them together: the in-depth understanding that only personal interaction can give us, the focus on the powerful and unusual that the news offers, and the statistical view that gives us the opportunity to see everyone.

We have many ways of learning about the world and we should make use of all of them. A statistical view without personal experience lacks depth, and personal experience without statistical knowledge lacks perspective.

The problem is that most of our focus goes toward personal experience and the news. They are held in high regard, while statistics are left to a small corner of our culture. This is not where they belong. A society that has the aspiration to care for everyone needs to bring a statistical understanding of the world into the center of its culture.

For this, we need to remember what statistical figures really mean. Spreadsheets are not just numbers; they tell us about the reality of the people around us and allow us to see what is happening to everyone, all at once.

This article was originally published on Our World in Data and has been republished here under a Creative Commons license. Read the original article.

Image Credit: NASA

DeepMind AI Hunts Down the DNA Mutations Behind Genetic Disease


Proteins are like Spider-Man in the multiverse.

The underlying story is the same: each building block of a protein is based on a three-letter DNA code. However, change one letter, and the same protein becomes a different version of itself. If we’re lucky, some of these mutants can still perform their normal functions.

When we’re unlucky, a single DNA letter change triggers a myriad of inherited disorders, such as cystic fibrosis and sickle cell disease. For decades, geneticists have hunted down these disease-causing mutations by examining shared genes in family trees. Once found, gene-editing tools such as CRISPR are beginning to help correct genetic typos and bring life-changing cures.

The problem? There are more than 70 million possible DNA letter swaps in the human genome. Even with the advent of high-throughput DNA sequencing, scientists have painstakingly uncovered only a sliver of potential mutations linked to diseases.

This week, Google DeepMind brought a new tool to the table: AlphaMissense. Based on AlphaFold, their blockbuster algorithm for predicting protein structures, the new algorithm analyzes DNA sequences and works out which DNA letter swaps likely lead to disease.

The tool only focuses on single DNA letter changes called “missense mutations.” In several tests, it categorized 89 percent of the tens of millions of possible genetic typos as either benign or pathogenic, said DeepMind.

AlphaMissense expands DeepMind’s work in biology. Rather than focusing only on protein structure, the new tool goes straight to the source code—DNA. Just a tenth of a percent of missense mutations in human DNA have been mapped using classic lab tactics. AlphaMissense opens a new genetic universe in which scientists can explore targets for inherited diseases.

This knowledge is “crucial to faster diagnosis” and to getting to the “root cause of disease,” the authors wrote in a blog post.

For now, the company is only releasing the catalog of AlphaMissense predictions, rather than the code itself. They also warn the algorithm isn’t meant for diagnoses. Rather, it should be viewed more like a tip-line for disease-causing mutations. Scientists will have to examine and validate each tip using biological samples.

“Ultimately, we hope that AlphaMissense, together with other tools, will allow researchers to better understand diseases and develop new life-saving treatments,” said study authors Žiga Avsec and Jun Cheng at DeepMind.

Let’s Talk Proteins

A quick intro to proteins. These molecules are made from genetic instructions in our DNA represented by four letters: A, T, C, and G. Combining three of these letters codes for a protein’s basic building block—an amino acid. Proteins are made up of 20 different types of amino acids.

Evolution programmed redundancy into the DNA-to-protein translation process. Multiple three-letter DNA codes create the same amino acid. Even if some DNA letters mutate, the body can still build the same proteins and ship them off to their normal workstations without issue.
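As a concrete sketch of that redundancy (using a tiny hand-picked slice of the standard codon table, not data from any particular study), a third-letter swap can leave the protein unchanged, while the middle-letter swap behind sickle cell disease does not:

```python
# A tiny excerpt of the standard DNA codon table. Note the
# redundancy: two different codons both encode glutamate.
CODON_TABLE = {
    "GAA": "Glu",  # glutamate
    "GAG": "Glu",  # glutamate (synonymous codon)
    "GTG": "Val",  # valine
}

def translate(codon):
    """Look up the amino acid a single codon encodes."""
    return CODON_TABLE[codon]

# Synonymous mutation: swapping the third letter (A -> G)
# still yields glutamate, so the protein is unaffected.
assert translate("GAA") == translate("GAG") == "Glu"

# Missense mutation: swapping the middle letter (A -> T) turns
# glutamate into valine -- the single-letter change in the
# hemoglobin gene that causes sickle cell disease.
assert translate("GTG") == "Val"
```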

The problem is when a single letter change bulldozes the entire operation.

Scientists have long known these missense mistakes lead to devastating health consequences. But hunting them down has taken years of tedious work. To do this, scientists manually edit DNA sequences in a suspicious gene—letter by letter—make them into proteins, then observe their biological functions to hunt down the missense mutation. With hundreds of potential suspects, nailing down a single mutation can take years.
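To get a feel for that search space, here is a toy enumeration (with a hand-written fragment of the standard codon table, purely for illustration) of every single-letter variant of one codon. Three positions times three alternative letters gives nine variants, each either synonymous, missense, or a premature stop:

```python
# Enumerate every single-letter swap of one codon and classify it.
CODON_TABLE = {
    "GAG": "Glu", "GAA": "Glu",                # glutamate
    "AAG": "Lys", "CAG": "Gln", "TAG": "STOP",
    "GCG": "Ala", "GGG": "Gly", "GTG": "Val",
    "GAC": "Asp", "GAT": "Asp",
}

def single_letter_variants(codon):
    """Yield all 9 codons one letter away from the input codon."""
    for pos in range(3):
        for letter in "ACGT":
            if letter != codon[pos]:
                yield codon[:pos] + letter + codon[pos + 1:]

ref = "GAG"  # encodes glutamate
variants = list(single_letter_variants(ref))
assert len(variants) == 9  # 3 positions x 3 alternative letters

synonymous = [v for v in variants if CODON_TABLE[v] == "Glu"]
nonsense = [v for v in variants if CODON_TABLE[v] == "STOP"]
missense = [v for v in variants if v not in synonymous + nonsense]

assert synonymous == ["GAA"]  # silent: still glutamate
assert nonsense == ["TAG"]    # premature stop codon
assert len(missense) == 7     # the kind of swap AlphaMissense targets
```

Scaled across the whole genome, this per-codon arithmetic is how the search space balloons into the tens of millions of possible swaps.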

Can we speed it up? Enter machine minds.

AI Learning ATCG

DeepMind joins a burgeoning field that uses software to predict disease-causing mutations.

Compared to previous computational methods, AlphaMissense has a leg up. The tool leverages learnings from its predecessor algorithm, AlphaFold. Known for solving protein structure prediction—a grand challenge in the field—AlphaFold is in the algorithmic biology hall-of-fame.

AlphaFold predicts protein structures—which often determine function—based on amino acid sequences alone. Here, AlphaMissense uses AlphaFold’s “intuition” about protein structures to predict whether a mutation is benign or detrimental, study author and DeepMind’s vice president of research Dr. Pushmeet Kohli said at a press briefing.

The AI also leverages the large language model approach. In this way, it’s a little like GPT-4, the AI behind ChatGPT, only rejiggered to decode the language of proteins. These algorithmic editors are great at homing in on protein variants and flagging which sequences are biologically plausible and which aren’t. To Avsec, that’s AlphaMissense’s superpower. It already knows the rules of the protein game—that is, it knows which sequences work and which fail.

As a proof-of-concept, the team used a standardized database of missense variants, called ClinVar, to challenge their AI system. These genetic typos lead to multiple developmental disorders. AlphaMissense bested existing models for nailing down disease-causing mutations.

A Game-Changer?

Predicting protein structures can be useful for stabilizing protein drugs and nailing down other biophysical properties. However, solving structure alone has “generally been of little benefit” when it comes to predicting variants that cause diseases, said the authors.

With AlphaMissense, DeepMind wants to turn the tide.

The team is releasing its entire database of predicted disease-causing mutations to the public. Overall, the algorithm classified 32 percent of all missense variants as likely pathogenic and 57 percent as likely benign. It joins others in the field, such as PrimateAI, first released in 2018 to screen for dangerous mutants.

To be clear: the results are only predictions. Scientists will have to validate these AI-generated leads in lab experiments. AlphaMissense provides “only one piece of evidence,” said Dr. Heidi Rehm at the Broad Institute, who wasn’t involved in the work.

Nevertheless, the AI model has already generated a database that scientists can tap into “as a starting point for designing and interpreting experiments,” said the team.

Moving forward, AlphaMissense will likely have to tackle protein complexes, said Dr. Joseph Marsh and Dr. Sarah Teichmann, who wrote a commentary accompanying the study. These sophisticated biological architectures are fundamental to life. Any mutation can crack their delicate structure, cause them to malfunction, and lead to disease. Dr. David Baker’s lab at the University of Washington—another pioneer in protein structure prediction—has already begun using machine learning to explore these protein cathedrals.

For now, no single tool that predicts disease-causing DNA mutations can be relied on to diagnose genetic diseases, as symptoms often result from both inherited mutations and environmental cues. This applies to AlphaMissense as well. But as the algorithm—and interpretation of its results—advances, its use in the “diagnostic odyssey will continue to improve,” they said.

Image Credit: Google DeepMind / Unsplash

Agility’s New Factory Can Crank Out 10,000 Humanoid Robots a Year


Simple robots have long been a manufacturing staple, but more advanced robots—think Boston Dynamics’ Atlas—have mostly been bespoke creations in the lab. That’s begun to change in recent years as four-legged robots like Boston Dynamics’ Spot have gone commercial. Now, it seems, humanoid robots are aiming for mass markets too.

Agility Robotics announced this week it has completed a factory to produce its humanoid Digit robot. The 70,000-square-foot facility, based in Salem, Oregon, has a top capacity of 10,000 robots annually. Agility broke ground on the factory last year and plans to begin operations later this year. The first robots will be delivered in 2024 to early customers taking part in the company’s partner program. After that, Agility will open orders to everyone in 2025.

“The opening of our factory marks a pivotal moment in the history of robotics: the beginning of the mass production of commercial humanoid robots,” Damion Shelton, Agility Robotics’ cofounder and CEO, said in a press release Tuesday.

The latest version of Digit stands 5 feet 9 inches tall and weighs 141 pounds, according to the company’s website. It has a flat head with a pair of emoji-like eyes and two arms designed to pick up and move totes and boxes, and it walks on legs that hinge backward like a bird’s. The robot can work 16 hours a day and docks itself to a charging station to recharge.

Founded in 2015, Agility Robotics was spun out of Oregon State University’s Robotics Laboratory, where robotics professor and cofounder Jonathan Hurst leads research in legged robots. Little more than a pair of legs, Digit’s direct ancestor Cassie launched in 2017. Agility had added arms and a suite of sensors, including a vaguely head-like lidar unit, by the time it announced the first commercial version of Digit in early 2020.

Back then, Digit was marketed as a delivery robot that would unfold from the back of a van and drop packages on porches. Though its first broad commercial application will instead be moving boxes and totes, the company still views Digit as a multi-purpose platform with other uses ahead.

“I believe dynamic-legged robots will one day help take care of elderly and infirm people in their homes, assist with lifesaving efforts in fires and earthquakes, and deliver packages to our front doors,” Hurst wrote for IEEE Spectrum in 2020.

To go bigger, however, the company will have to prove Digit is widely useful beyond the experimental and then figure out how to make it en masse.

That’s where the new factory, dubbed RoboFab, comes in. To date, robots like Digit are made in the single digits or dozens at most. Atlas is still a research robot—though its slick moves make for viral videos—and a new push into humanoid robots by other players, including the likes of Tesla, Figure, and Sanctuary, is only just getting started.

It would be an impressive achievement if Agility hews to its aggressive timeline.

Apart from building and opening a factory, challenges to scaling include setting up a steady supply chain, nailing consistent product quality beyond a few units, and servicing and supporting customer robots in the field. All that will take time—maybe years. And of course, in order to produce 10,000 robots annually, they have to sell that many too. The company expects the first year’s production to be in the hundreds.

But if Digit proves capable, affordable, and easy for businesses to integrate in the years ahead, it seems likely there would be ample demand for its box-and-tote picking skills. Amazon invested in Agility’s $150-million Series B in 2022 and has been packing its warehouses with robots for years. Digit could fit an unfilled niche in its machine empire.

Further broadening the number of tasks the robot can complete—and thereby widening the market—would likewise boost demand for the bot. Amazon, and others no doubt, would likely be more than willing to entertain the idea of Digit one day delivering packages.

But first, Agility will have to fire up the assembly line and prove they can keep it humming along at a healthy pace.

Image Credit: Agility Robotics

Party Drug MDMA Inches Closer to Breakthrough Approval for PTSD


MDMA doesn’t have the best reputation. Known as “ecstasy” or “molly,” the drug is synonymous with rave culture: all-night electronic beats and choreographed laser shows.

Still, it may soon join the psychedelic drug resurgence—not for partying, but for tackling severe mental trauma, such as post-traumatic stress disorder (PTSD).

Last week, Nature Medicine reported a multi-site, randomized, double-blind trial in over 100 patients with PTSD. The drug, combined with therapy, was carefully administered to patients being monitored in doctors’ offices. Compared to patients given the same therapy with a placebo, MDMA was far more effective at dampening PTSD symptoms.

The study, led by the non-profit Multidisciplinary Association for Psychedelic Studies (MAPS), follows an earlier Phase 3 trial—the last stage of clinical testing before regulatory approval. In that trial, participants also received therapy. Roughly twice the number of people given MDMA rather than a placebo recovered from their PTSD diagnosis.

The new, long-awaited study bolsters those earlier results by recruiting a more diverse population and showing that the treatment worked across multiple racial and ethnic groups.

To be very clear: the trials are for MDMA-assisted therapy. The psychotherapy component is key. The team repeatedly warns against seeking out the drug and taking it without supervision.

“What we believe is that the results that we got were not from MDMA,” said MAPS founder Rick Doblin to Nature in an earlier interview. “They were from highly trained therapists who are then using MDMA.”

The Food and Drug Administration generally requires two controlled trials before it considers approving a drug. MAPS has now delivered. The organization plans to seek approval this October. If the results hold up, the US may join Australia in welcoming a previously condemned drug as a new treatment for PTSD.

It won’t be an easy road. Although public and scientific opinions have shifted towards tolerance, MDMA is still listed as a Schedule 1 drug by the DEA. Drugs in this category are deemed to have “no currently accepted medical use and a high potential for abuse,” placing them alongside heroin.

That said, scientists are increasingly taking psychedelics seriously as tools that can help combat difficult mental problems. Also among Schedule 1 drugs are cannabis, psilocybin (from magic mushrooms), and LSD (commonly known as acid). These illicit drugs are gradually being embraced both in the research and clinical spheres as valid candidates for further study.

To Dr. Amy Kruse at the Maryland-based venture capital firm Satori Neuro, who was not involved in either study, “MAPS has been the beacon to kind of take on this work…There are many people that can benefit from this treatment, and I think it shows a pathway for the potential rescheduling of other molecules.”

A Checkered Past

MDMA—an acronym for its chemical name, 3,4-methylenedioxymethamphetamine—didn’t always wear the party drug black hat. It has enthralled psychiatrists since its birth in 1912.

Developed by a German pharmaceutical company to control bleeding, the drug soon caught the eye of mental health professionals. From the 1970s to its complete ban in 1985, thousands of individual reports suggested the drug, delivered in a doctor’s office with therapy, enhanced treatment results. Patients seemed to be able to better express and process their feelings, in turn gaining insights into their own mental states.

However, the drug also leaked out onto the street around the same time, spurring the DEA to ban it completely in 1985. Research into its potential for enhancing psychotherapy screeched to a halt. In turn, scientists were left with only individual case reports and anecdotes—hardly sufficient evidence to continue research.

Enter Doblin. Convinced that research into MDMA and other psychedelic drugs shouldn’t be abandoned, he founded MAPS in 1986—a year after the ban. For nearly four decades, his team has fought to reestablish the drug as a legitimate candidate for treating PTSD and depression. Back then, neuroscientists studying drug toxicity were the norm. Treatment potential? Not so much.

Opinions began to shift in the late 2010s. A prominent neuroscientist called the drug “a probe and treatment for social behaviors” in a highly prestigious journal. MDMA regained its 1970s reputation as an “empathogen,” in that it fosters feelings of empathy and closeness. How MDMA triggers those intimate feelings isn’t yet fully understood, but it seems to increase levels of several chemical messengers in the brain, including serotonin, dopamine, and norepinephrine. Lower amounts of these chemicals are often associated with depression.

In 2021, MAPS and Doblin had their first major win in a clinical trial studying 90 people with PTSD undergoing therapy, either with MDMA or a placebo. After three sessions, 67 percent of those receiving MDMA no longer qualified for PTSD diagnosis, compared to just 32 percent of people given placebos.

The new 104-person study bolsters these promising results. Patients attended three 8-hour sessions across roughly 12 weeks. Regardless of ethnicity or race, 71 percent of people given MDMA and therapy were freed of their PTSD diagnosis, compared to 48 percent in the placebo group. MDMA-assisted therapy was also effective in people with other mental disorders, such as depression—an important use case as the two conditions often go hand-in-hand. Most participants experienced mild side effects, such as muscle tightness, feeling hot, or nausea.

Neurologist Dr. Jennifer Mitchell at the University of California, San Francisco, who led both Phase 3 studies, told Nature that the drug acts as a “communications lubricant.” It doesn’t make therapy sessions more fun—participants still have to work through their trauma—but it does help them more readily open up to their therapists, without experiencing shame or trauma.

And these effects are seemingly generalized regardless of ethnicity or race. “In a historic first, to our knowledge, for psychedelic treatment studies, participants who identified as ethnically or racially diverse encompassed approximately half of the study sample,” the team wrote.

A Bright Future?

It’s extremely difficult to blind a psychedelic study. Given MDMA’s potent effects, it’s very clear to patients if they’re high after taking a pill, which could result in bias.

To get around the problem, MAPS developed a special protocol, first approved by the FDA in 2017. After each treatment session, the volunteers’ symptoms were measured by psychologists not privy to the experiment’s design. They were “blinded” to which group each patient was in and did not administer the drug or therapy.

It’s not a perfect solution. In a post-trial survey, most people given MDMA knew what they had received. To Dr. Erick Turner at Oregon Health and Science University in Portland, this doesn’t fit the FDA’s definition of “blinding.” Even if the drug is deemed safe and effective, regulatory agencies will still need to iron out the rules. Because therapy is a key component but not under the FDA’s jurisdiction, the agency will somehow have to dissuade people from trying the drug on their own in unsupportive, or even dangerous, settings.

MDMA has also been linked to bad experiences in people with schizophrenia or other neurological disorders. These terrifying trips aren’t just detrimental to the patient’s mental health—they could also set back the renaissance of psychedelic treatments.

In all, lots of kinks need to be worked out. Given MDMA’s long history, its patent has expired, reducing commercial incentives to develop or manufacture the drug. But with the new study, MDMA-assisted therapy is inching closer to regulatory approval as an alternative for people battling mental demons.

“It’s an important study,” said MDMA researcher Dr. Matthias Liechti at the University of Basel in Switzerland, who wasn’t involved in the trial. “It confirms MDMA works.”

Image Credit: chenspec / Pixabay

Flowering Plants Survived the Dinosaur-Killing Asteroid—and May Outlive Us


If you looked up 66 million years ago you might have seen, for a split second, a bright light as a mountain-sized asteroid burned through the atmosphere and smashed into Earth. It was springtime and the literal end of an era, the Mesozoic.

If you somehow survived the initial impact, you would have witnessed the devastation that followed: raging firestorms, megatsunamis, and a nuclear winter lasting months to years. The 180-million-year reign of non-avian dinosaurs was over in the blink of an eye, along with at least 75% of the species that shared the planet with them.

Following this event, known as the Cretaceous-Paleogene mass extinction (K-Pg), a new dawn emerged for Earth. Ecosystems bounced back, but the life inhabiting them was different.

Many iconic pre-K-Pg species can only be seen in a museum. The formidable Tyrannosaurus rex, the Velociraptor, and the winged dragons of the Quetzalcoatlus genus could not survive the asteroid and are confined to deep history. But if you take a walk outside and smell the roses, you will be in the presence of ancient lineages that blossomed in the ashes of K-Pg.

Although the living species of roses are not the same ones that shared Earth with Tyrannosaurus rex, their lineage (family Rosaceae) originated tens of millions of years before the asteroid struck.

And roses are not an unusual angiosperm (flowering plant) lineage in this regard. Fossils and genetic analysis suggest that the vast majority of angiosperm families originated before the asteroid.

Ancestors of the ornamental orchid, magnolia, and passionflower families, grass and potato families, the medicinal daisy family, and the herbal mint family all shared Earth with the dinosaurs. In fact, the explosive evolution of angiosperms into the roughly 290,000 species today may have been facilitated by K-Pg.

Angiosperms seemed to have taken advantage of the fresh start, similar to the early members of our own lineage, the mammals.

However, it’s not clear how they did it. Angiosperms, so fragile compared with dinosaurs, cannot fly or run to escape harsh conditions. They rely on sunlight for their existence, which was blotted out.

What Do We Know?

Fossils in different regions tell different versions of events. It is clear there was high angiosperm turnover (species loss and resurgence) in the Amazon when the asteroid hit, and a decline in plant-eating insects in North America which suggests a loss of food plants. But other regions, such as Patagonia, show no pattern.

A study in 2015 analyzing angiosperm fossils of 257 genera (families typically contain multiple genera) found K-Pg had little effect on extinction rates. But this result is difficult to generalize across the 13,000 angiosperm genera.

My colleague Santiago Ramírez-Barahona, from the Universidad Nacional Autónoma de México, and I took a new approach to solving this confusion in a study we recently published in Biology Letters. We analyzed large angiosperm family trees, which previous work mapped from mutations in DNA sequences from 33,000-73,000 species.

This kind of tree-thinking has underpinned major insights about the evolution of life ever since Charles Darwin scribbled the first family tree.

Although the family trees we analyzed did not include extinct species, their shape contains clues about how extinction rates changed through time, through the way the branching rate ebbs and flows.

The extinction rate of a lineage, in this case angiosperms, can be estimated using mathematical models. The one we used compared ancestor age with estimates for how many species should be appearing in a family tree according to what we know about the evolution process.

It also compared the number of species in a family tree with estimates of how long it takes for a new species to evolve. This gives us a net diversification rate—how fast new species are appearing, adjusted for the number of species that have disappeared from the lineage.

The model divides time into bands, such as a million years, to show how the extinction rate varies through time and to identify periods with unusually high extinction rates. It can also suggest when major shifts in species creation and diversification occurred, as well as when a mass extinction event may have taken place, and it shows how well the DNA evidence supports these findings.
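As a back-of-the-envelope illustration of the idea (a simplified estimator under a constant-rates birth-death assumption, not the actual model the study fitted), a clade's net diversification rate can be read off its age and living species count:

```python
import math

def net_diversification_rate(n_species, crown_age_myr):
    """Crude net diversification rate (speciation minus extinction),
    per lineage per million years, assuming exponential growth from
    the two lineages present at the clade's crown node."""
    return math.log(n_species / 2) / crown_age_myr

# Roughly 290,000 living angiosperm species (the figure cited above),
# with a crown age somewhere in the 140-240-million-year window.
fast = net_diversification_rate(290_000, 140)
slow = net_diversification_rate(290_000, 240)
print(f"net rate: {slow:.3f} to {fast:.3f} per lineage per Myr")
```

This constant-rate version cannot see extinction spikes at all; separating speciation from extinction within each time band is what lets the actual model test whether rates jumped around K-Pg.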

We found that extinction rates seem to have been remarkably constant over the last 140-240 million years. This finding highlights how resilient angiosperms have been over hundreds of millions of years.

We cannot ignore the fossil evidence showing that many angiosperm species did disappear around K-Pg, with some locations hit harder than others. But, as our study seems to confirm, the lineages (families and orders) to which species belonged carried on undisturbed, creating life on Earth as we know it.

This is different from how non-avian dinosaurs fared, which disappeared in their entirety: their whole branch was pruned.

Scientists believe angiosperm resilience to the K-Pg mass extinction (why only leaves and branchlets of the angiosperm tree were pruned) may be explained by their ability to adapt: for example, by evolving new seed-dispersal and pollination mechanisms.

They can also duplicate their entire genome (all of the DNA instructions in an organism) which provides a second copy of every single gene on which selection can act, potentially leading to new forms and greater diversity.

The sixth mass extinction event we currently face may follow a similar trajectory. A worrying number of angiosperm species are already threatened with extinction, and their demise will probably lead to the end of life as we know it.

It’s true angiosperms may blossom again from a stock of diverse survivors—and they may outlive us.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Avis Yang / Unsplash 

This Week’s Awesome Tech Stories From Around the Web (Through September 16)


ROBOTICS

This Driverless Car Company Is Using Chatbots to Make Its Vehicles Smarter
Will Douglas Heaven | MIT Technology Review
“Self-driving car startup Wayve can now interrogate its vehicles, asking them questions about their driving decisions—and getting answers back. …In a demo the company gave me this week, CEO Alex Kendall played footage taken from the camera on one of its Jaguar I-PACE vehicles, jumped to a random spot in the video, and started typing questions: ‘What’s the weather like?’ The weather is cloudy. ‘What hazards do you see?’ There is a school on the left. ‘Why did you stop?’ Because the traffic light is red.”

ENERGY

America Just Hit the Lithium Jackpot
Ross Andersen | The Atlantic
“About 16.4 million years ago, magma surged through a raised mound near Nevada’s present-day border with Oregon and began spreading an unholy orange glow outward over the region. …[A new paper published in Science Advances] claims that underneath the volcano’s extinct crater is a thick brown clay that is shot through with what could be the largest-known lithium deposit on the planet. If the discovery holds up, and the lithium is easy to extract and refine—both big ifs—this ancient geological event could end up shaping contemporary geopolitics, and maybe even the future of green energy.”

TRANSPORTATION

Tesla Reinvents Carmaking With Quiet Breakthrough
Norihiko Shirouzu | Reuters
“Tesla is closing in on an innovation that would allow it to die cast nearly all the complex underbody of an EV in one piece, rather than about 400 parts in a conventional car, the people said. The know-how is core to Tesla’s ‘unboxed’ manufacturing strategy unveiled by [CEO Elon Musk] in March, a linchpin of his plan to churn out tens of millions of cheaper EVs in the coming decade, and still make a profit, the sources said.”

COMPUTING

Liquid Computer Made From DNA Comprises Billions of Circuits
David Nield | ScienceAlert
“[Despite] the passing of 30 years since the first prototype, most DNA computers have struggled to process more than a few tailored algorithms. A team [of] researchers from China has now come up with a DNA integrated circuit (DIC) that’s far more general purpose. Their liquid computer’s gates can form an astonishing 100 billion circuits, showing its versatility with each capable of running its own program.”

ARTIFICIAL INTELLIGENCE

How AI Agents Are Already Simulating Human Civilization
Ben Dickson | VentureBeat
“These AI agents are capable of simulating the behavior of a human in their daily lives, from mundane tasks to complex decision-making processes. Moreover, when these agents are combined, they can emulate the more intricate social behaviors that emerge from the interactions of a large population. This work opens up many possibilities, particularly in simulating population dynamics, offering valuable insights into societal behaviors and interactions.”

TRANSPORTATION

This EV Smashed the World Record for Distance on a Single Charge
Jonathan M. Gitlin | Ars Technica
“The diminutive coupe…was built for efficiency, and in a six-day test at Munich airport, it set a new distance record on a single charge (for a non-solar EV): 1,599 miles (2,574 km), with less battery capacity than many plug-in hybrids—just 15.5 kWh. …Their eventual distance broke the existing record by 60 percent, achieving a scarcely believable 103.2 miles/kWh, or 0.6 kWh/100 km. For those who think in terms of miles per gallon, it’s the equivalent of traveling 3,815 miles on a single gallon of gas.”

ART

Funky AI-Generated Spiraling Medieval Village Captivates Social Media
Benj Edwards | Ars Technica
“On Sunday, a Reddit user named ‘Ugleh’ posted an AI-generated image of a spiral-shaped medieval village that rapidly gained attention on social media for its remarkable geometric qualities. Follow-up posts garnered even more praise, including a tweet with over 145,000 likes. …Reactions to the artwork online ranged from wonder and amazement to respect for developing something novel in generative AI art. …Perhaps most notably, Y-Combinator co-founder and frequent social media tech commentator Paul Graham wrote, ‘This was the point where AI-generated art passed the Turing Test for me.’”

3D PRINTING

Mighty Buildings Raises $52M to Build 3D-Printed Prefab Homes
Kyle Wiggers | TechCrunch
“The new tranche, which sources familiar with the matter say values the startup at between $300 million and $350 million, brings Mighty Buildings’ total raised to $150 million. CEO Scott Gebicke says that it’ll be put toward Mighty Buildings’ expansion in North America and the Middle East, particularly Saudi Arabia, and supporting the launch of the company’s next-gen modular homebuilding kit.”

LAW AND ETHICS

US Rejects AI Copyright for Famous State Fair-Winning Midjourney Art
Benj Edwards | Ars Technica
“The office is saying that because the work contains a non-negligible (‘more than a de minimis’) amount of content generated by AI, Allen must formally acknowledge that the AI-generated content is not his own creation when applying for registration. As established by Copyright Office precedent and judicial review, US copyright registration for a work requires human authorship.”

SPACE

Scientists Say You’re Looking for Alien Civilizations All Wrong
Ramin Skibba | Wired
“An influential group of researchers is making the case for new ways to search the skies for signs of alien societies. …The team of 22 scientists released a new report on August 30, contending that the field needs to make better use of new and underutilized tools, namely gigantic catalogs from telescope surveys and computer algorithms that can mine those catalogs to spot astrophysical oddities that might have gone unnoticed. Maybe an anomaly will point to an object or phenomenon that is artificial—that is, alien—in origin.”

Image Credit: Karsten Winegeart / Unsplash

Newly Discovered Spirals of Brain Activity May Help Explain Cognition

Recently I perched on the edge of a cliff at Ausable Chasm, staring at the whitewater over 100 feet below. Water rushed through sandstone cliffs before hitting a natural break and twirling back onto itself, forming multiple hypnotic swirls. Over millennia, these waters have carved the magnificent stone walls lining the chasm, supporting a vibrant ecosystem.

The brain may do the same for cognition.

We know that different brain regions constantly coordinate their activity patterns, resulting in waves that ripple across the brain. Different types of waves correspond to differing mental and cognitive states.

That’s one idea for how the brain organizes itself to support our thoughts, feelings, and emotions. But if the brain’s information processing dynamics are like waves, what happens when there’s turbulence?

In fact, the brain does experience the equivalent of neural “hurricanes”: spiraling waves of activity that sweep across its surface. These spirals bump into one another, and when they do, the resulting computations correlate with cognition.

These findings come from a unique study in Nature Human Behaviour that bridges neuroscience and fluid dynamics to unpack the inner workings of the human mind.

Multiple interacting spirals organize the flow of brain activity. Image Credit: Gong et al.

The team analyzed 100 brain scans collected from the Human Connectome Project using methods usually reserved for observing water flow patterns in physics. The unconventional marriage of fields paid off: they found mysterious, spiraling patterns of wave activity in the brain, both at rest and during challenging mental tasks.

The brain spirals often grew from select regions that bridge adjacent local neural networks. Eventually, they propagated across the cortex—the wrinkly, outermost region of the brain.

Often called the “seat of intelligence,” the cortex is a multitasker. Dedicated regions process our senses. Others interweave new experiences with memories and emotions, and in turn, form the decisions that help us adapt to an ever-changing world.

For the cortex to properly function, communication between regions is key. In a series of tests, brain spirals seemed to be the messenger, organizing local neural networks across the cortex into a coherent computing processor. Each spiral was also dedicated to a particular cognitive task. For example, when someone was listening to a story—as compared to solving math problems—the vortices began in different brain regions and created their own spin patterns, a cognitive fingerprint of sorts.

By analyzing these spiral wave fingerprints, the team found they could classify different stages of cognitive processing using brain images alone.
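To make that idea concrete, here is a toy Python sketch of such a classifier. It is not the study's actual method, and the "fingerprint" features (spiral center coordinates, net rotation direction) are entirely made up for illustration: two task conditions are given slightly different feature distributions, and a simple nearest-centroid rule then recovers which task each held-out scan came from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spiral "fingerprints": each scan reduced to a few numbers
# (spiral center coordinates, net rotation direction). The two task
# conditions get slightly different distributions to stand in for real data.
def fake_scans(n, center, spin):
    return np.column_stack([
        rng.normal(center[0], 1.0, n),  # spiral center x
        rng.normal(center[1], 1.0, n),  # spiral center y
        rng.normal(spin, 0.5, n),       # net rotation (sign = direction)
    ])

X = np.vstack([fake_scans(100, (2, 5), +1), fake_scans(100, (6, 1), -1)])
y = np.repeat([0, 1], 100)  # 0 = language task, 1 = math task

# Nearest-centroid classifier: label a scan by the closer class mean.
train = slice(0, None, 2)   # even rows for training
test = slice(1, None, 2)    # odd rows held out
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[test][:, None] - centroids, axis=2), axis=1)
accuracy = (pred == y[test]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The real study worked from fMRI-derived spiral dynamics rather than synthetic numbers, but the logic is the same: if each cognitive state leaves a distinct spiral signature, a classifier can read the state back from the signature alone.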

Finding turbulence in the brain is another step towards understanding how our biological computer works and could inspire the creation of future brain-based machines.

“By unraveling the mysteries of brain activity and uncovering the mechanisms governing its coordination, we are moving closer to unlocking the full potential of understanding cognition and brain function,” said study author Dr. Pulin Gong at the University of Sydney.

Won’t You Be My Neighbor?

A fundamental mystery of the brain is how electrical sparkles in neurons translate into thoughts, reasoning, memories, and even consciousness.

To unravel it all, we need to go up the pyramid of neural processing.

Starting at the bottom: neurons. To be fair, they’re incredibly sophisticated mini-computers on their own. They’re also really nosy. They constantly chitchat with their neighbors using a variety of chemical signals, called neurotransmitters. You might have heard of some: dopamine, serotonin, and even hormones.

Meanwhile, neurons process local gossip—carried by electrical pulses—and change their behavior based on what they hear. Some relationships strengthen. Others break. In this way, the brain forms local neural networks to support functions like, for example, visual processing.
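A classic toy model of this strengthen-or-break dynamic is Hebbian learning—"neurons that fire together wire together." The sketch below is purely illustrative (not taken from the study): connections between frequently co-active neurons grow, while a decay term prunes links between neurons that rarely fire together.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Neurons that fire together wire together": a minimal Hebbian update.
# Weights between co-active neurons grow; a decay term prunes unused links.
n = 5
weights = np.zeros((n, n))
lr, decay = 0.1, 0.02

for _ in range(200):
    # Neurons 0-2 fire often and together; neurons 3-4 fire rarely.
    active = rng.random(n) < np.array([0.9, 0.9, 0.9, 0.1, 0.1])
    x = active.astype(float)
    weights += lr * np.outer(x, x) - decay * weights  # strengthen / weaken
    np.fill_diagonal(weights, 0.0)                    # no self-connections

# Frequently co-active neurons end up strongly linked; rare pairs stay weak,
# carving a tight local sub-network out of an initially blank slate.
print(weights.round(2))
```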

“Research into brain activity has made substantial progress in understanding local neural circuits,” the team wrote.

What’s missing is the bigger picture. Imagine zooming out from the local neighborhood to the entire world. Thanks to a boom in neurotechnology, scientists have been able to record from increasingly vast regions of the brain. Digging into all this new data, previous studies have found multiple local networks that contribute to different behaviors.

Yet many of these insights into brain organization have focused on neurons communicating in a linear pattern—like data zapping along undersea fiber-optic cables. To broaden our view, we also need to look for more complex 3D patterns—for example, spirals or vortices.

Roughly two years ago, the team tapped into a hefty resource in the hunt for brain activity turbulence: functional MRI data, covering the entire cortex, from the Human Connectome Project (HCP). Launched in 2009, the project has developed multiple tools to map the human brain at unprecedented scales and generated a massive database for researchers. The maps don’t just cover the structure of the brain—many have also documented brain activity as participants engaged in different cognitive tasks.

Here, the team selected brain images from a section of HCP data. This dataset imaged brain connectivity and function in 1,200 healthy young adults between 22 and 35 years of age while they were at rest or challenged with multiple mental tasks.

They focused on brain images from three cohorts of 100 people each. One cohort was completely at rest. Another was challenged with a language and math task. The final cohort flexed their working memory—the mental sketchpad we use to hold new information and decide what to do next.

With mathematical tools generally used to decrypt turbulent flows, the team analyzed MRI data for patterns that correspond to cognition—in this case, math, language, and working memory.

Put very simply, the analyses pinpointed “the eye of the storm” and predicted how fast and wide the neural swirls would spread out from there. They moved and interacted “with each other in an intriguing manner, which was very exciting,” said the team.

The spirals, like hurricanes, bounced across the cortex while rotating around fixed centers—called “phase singularities.” The pattern is surprisingly similar to those seen in other dynamic systems in physics and biology, such as turbulence, they said.
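As a rough illustration of how such singularities can be located, here is a minimal Python sketch—not the authors' actual pipeline. It assigns each pixel of a simulated activity grid an instantaneous phase via the Hilbert transform, then flags grid plaquettes where the phase winds by a full ±2π, which only happens at a spiral's center.

```python
import numpy as np
from scipy.signal import hilbert

def spiral_centers(signals, t):
    """Flag phase singularities in a (ny, nx, nt) stack of activity traces.

    At a spiral's center the instantaneous phase winds by a full +/-2*pi
    around any small loop; elsewhere the winding is ~0.
    """
    # Instantaneous phase of each pixel's time series via the Hilbert transform.
    phase = np.angle(hilbert(signals, axis=-1))[:, :, t]

    def wrap(d):  # map phase differences into (-pi, pi]
        return (d + np.pi) % (2 * np.pi) - np.pi

    # Sum wrapped phase steps counterclockwise around every 2x2 plaquette.
    winding = (
        wrap(phase[:-1, 1:] - phase[:-1, :-1])
        + wrap(phase[1:, 1:] - phase[:-1, 1:])
        + wrap(phase[1:, :-1] - phase[1:, 1:])
        + wrap(phase[:-1, :-1] - phase[1:, :-1])
    )
    ys, xs = np.where(np.abs(winding) > np.pi)  # ~ +/-2*pi at singularities
    return list(zip(ys, xs))

# Demo: a synthetic spiral wave rotating around the point (9.5, 9.5).
y, x = np.mgrid[0:20, 0:20]
theta = np.arctan2(y - 9.5, x - 9.5)
frames = np.cos(theta[:, :, None] - 0.5 * np.arange(64))
centers = spiral_centers(frames, t=32)
print(centers)
```

The study applied far more elaborate flow-field analysis to cortex-wide fMRI, but the winding-number idea above is the standard way phase singularities are defined in spiral-wave physics.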

A Spiraling Mystery

Why and how do these spirals occur? The team doesn’t yet have all the answers. But digging deeper, they found that the seeds of these spirals blossom out from boundaries between functional neural networks. The team thinks these twisting shapes could be essential for “effectively coordinating activity flow among these networks through their rotational motion.”

The spirals rotate and interact depending on the cognitive task at hand. They also tend to twirl and spread into brain regions dubbed “brain hubs,” such as the frontal parts of the brain or those related to integrating sensations.

But their interactions are especially enthralling. Based on the physics of turbulence, brain spirals that bump into each other carry a hefty amount of information. These spirals capture data in space and time and propagate the information across the cortical surface as nonlinear waves.

“The intricate interactions among multiple co-existing spirals could allow neural computations to be conducted in a distributed and parallel manner, leading to remarkable computational efficiency,” said Gong.

To Dr. Kentaroh Takagaki from Tokushima University, who was not involved in the study, “the results present a stark counterpoint to the established view” of information processing in the cortex.

For now, brain spirals remain rather mysterious. But with more work, they could yield insights into dementia, epilepsy, and other difficult neurological disorders.

Image Credit: Mitul Grover / Unsplash

Electric Vehicle Battery Recycling Gains Momentum With a Big New Closed-Loop System

Shifting to battery-powered vehicles is an essential step in tackling climate change, but it’s also creating worryingly large amounts of e-waste and demand for environmentally damaging mining. A new partnership to produce batteries made with recycled materials could help address the problem.

While there’s little question about the need to shift away from vehicles powered by fossil fuels, electrifying our entire transportation system isn’t going to be smooth sailing. Demand for lithium—the main ingredient in today’s leading batteries—has exceeded supply two years in a row, according to the International Energy Agency, despite a 180 percent increase in production since 2017.

There are similar concerns about shortages of other key ingredients like nickel, cobalt, and manganese, which could slow the much-needed transition to electric vehicles. These shortages are also incentivizing the rapid expansion of mining activities, which can be damaging to the environment, particularly if politicians turn a blind eye to lax standards in the rush to meet demand. That’s why there’s growing interest in recycling old batteries to retrieve the valuable metals contained within.

Now, a partnership between battery material producer BASF, graphene battery maker Nanotech Energy, battery recycler American Battery Technology Company (ABTC), and battery precursor material maker TODA Advanced Materials claims it will create the first closed-loop battery recycling system in North America. The group hopes to be producing new batteries from recycled materials by 2024.

“By working together, our four companies can pool their expertise and drive better and more sustainable outcomes for the entire North American electric vehicle and consumer electronics industries,” Curtis Collar from Nanotech Energy said in a press release.

“This is a major milestone among the ongoing advances and growth of the lithium-ion battery market, and we are proud [to be] playing such a key role in the reduction of CO2 emissions along the battery value chain.”

Under the agreement, BASF will produce materials used in battery cathodes from recycled metals. Nanotech Energy will then use those materials to build their lithium-ion battery cells. Some of those recycled metals will come from ABTC recycling battery scrap produced by Nanotech Energy as it manufactures batteries. These will be processed into battery material precursors by TODA and then into cathode materials by BASF.

Together, this will create a circular battery recycling system, according to the companies. They claim that using recycled metals in the production of lithium-ion batteries can cut the amount of CO2 generated while manufacturing them by roughly 25 percent.

Battery recycling has been attracting growing interest from investors, particularly after the US passed the Inflation Reduction Act last year, which contains many incentives to reuse older batteries. Earlier this month, battery recycler Ascend Elements announced a $542 million funding round, and in August, competitor Redwood Materials revealed it had secured $1 billion in investments.

According to McKinsey, most of the battery materials suitable for recycling currently come from consumer electronics and battery scrap from manufacturers because few electric vehicles have yet reached the end of their operational lives.

But the analysts predict this could change soon, with more than 100 million vehicle batteries due to be retired within the next decade. They think that revenues from battery recycling could jump to more than $95 billion a year by 2040 globally.

With such a lucrative prize on offer, and growing concerns about supply shortages, it seems recycled battery materials could soon be playing a major role in the electric vehicle transition.

Image Credit: Markus Spiske / Unsplash