
Posts

Showing posts from October, 2022

How tech is helping us talk to animals

For centuries, we didn’t even know those sounds existed. But as technology has advanced, so has our capacity to listen. Today, tools like drones, digital recorders, and artificial intelligence are helping us listen to the sounds of nature in unprecedented ways, transforming the world of scientific research and raising a tantalizing prospect: Someday soon, computers might allow us to talk to animals. As that prospect draws closer, Bakker cautions that the ability to communicate with animals stands to be either a blessing or a curse, and that we must think carefully about how we will use our technological advancements to interact with the natural world. We can use our understanding of our world’s sonic richness to gain a sense of kinship with nature and even potentially heal some of the damage we have wrought, but we also run the risk of using our newfound powers to assert our domination over animals and plants. Indigenous communities around the w...

Machine Learning Shaking Up Hard Sciences, Too

“I felt very threatened by machine learning,” says Jesse Thaler, a theoretical particle physicist at the Massachusetts Institute of Technology. Initially, he says, he felt it jeopardized his human expertise in classifying particle jets. But Thaler has since come to embrace it, applying machine learning to a variety of problems across particle physics. “Machine learning is a collaborator,” he says. Over the past decade, in tandem with the broader deep-learning revolution, particle physicists have trained algorithms to solve previously intractable problems and tackle completely new challenges. https://spectrum.ieee.org/machine-learning-in-physics

Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality

Even the best AI is not perfect, and when things go wrong ... we may face a ‘responsibility gap’, a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility. http://blog.practicalethics.ox.ac.uk/2022/09/are-we-heading-towards-a-post-responsibility-era-artificial-intelligence-and-the-future-of-morality/

Children from different socioeconomic backgrounds make different decisions when placed in the same risky situation, research finds.

“I hope this study—as well as other future studies by our lab and other people—will change perspectives,” Blake says. The research provides evidence that risky decisions in childhood do not always reflect poor judgment or a lack of self-control, he says. https://www.futurity.org/risk-taking-kids-socioeconomic-backgrounds-2808542-2/