![Google DeepMind: Being More Human to AI Makes It Perform Better](/var/ezflow_site/storage/images/iol-home/news-room/media-coverage/all-categories/o.r.-and-analytics-in-the-news/google-deepmind-being-more-human-to-ai-makes-it-perform-better/4553292-1-eng-US/Google-DeepMind-Being-More-Human-to-AI-Makes-It-Perform-Better_newslargethumb.jpg)
Google DeepMind: Being More Human to AI Makes It Perform Better
Google DeepMind researchers discovered that using prompts similar to human interaction greatly improved math skills in large language models
Deceptive videos and images created using generative AI could sway elections, crash stock markets and ruin reputations. Researchers are developing methods to limit their harm.
I induce the estimable Professor Tinglong Dai of Johns Hopkins University, an acclaimed AI specialist in supply chain management, to laugh aloud a few moments into our conversation.
The AI Incident Database chronicles over 2,000 incidents of AI causing harm. It’s a gulp-worthy number that ominously continues to grow. But the devil is in the details, and a mere count does not capture degrees of harm or malevolent intent. Pretending AI is safe is sheer folly, but imagining it the bringer of doom is equally foolish. To get a more realistic read on the damage AI has caused and is likely to cause, here’s a hard look at reported incidents in the real world.
AI "can potentially confuse or mislead viewers if they're not aware content was generated or edited with AI," TikTok said
It’s time to get real about artificial intelligence applied to our drug development and manufacturing outsourcing milieu: practical supply-chain enhancements from AI-generated insights that can be implemented right now.
AI companies attended the AI Insight Forum in Washington, DC.
Transparency around data collection and risk assessments should be mandated by law, especially in high-risk applications of AI.
If you’re a bit anxious that artificial intelligence might be coming for your job, you’re not alone: 22% of workers in the U.S. worry that technology will make their jobs obsolete, according to a Gallup poll out on Monday.
In May 2023, the World Health Organization (WHO) declared an end to the COVID-19 pandemic (1). Despite that announcement, fallout from the pandemic continues to reverberate through global supply chains, exposing their opacity and fragility and catalyzing their transformation. Geopolitical issues, such as Russia’s invasion of Ukraine and rising tensions between the United States and China, have shaped supply-chain transformation, resulting in what I call a “supply-chain iron curtain” that is poised to complicate international trade (2).
Ashley Smith
Public Affairs Coordinator
INFORMS
Catonsville, MD
[email protected]
443-757-3578