AInsights: Executive-level insights on the latest in generative AI…
Meta leading effort to label AI-generated content
One of the greatest threats compounding profitable disinformation and misinformation campaigns is the rise of deepfakes.
In recent news, a finance worker was tricked into paying out $25 million after a video call with his company’s CFO and coworkers turned out to be fake. The participants were digitally recreated using publicly available footage of each individual.
Explicit deepfake images of Taylor Swift widely spread across social media in January. One image shared on X was viewed 47 million times.
AI’s use in election campaigns, especially those by opposing forces, is already deceiving voters and threatens to wreak havoc on democracies everywhere.
Policy and counter-technology must keep up.
On February 8th, the Federal Communications Commission outlawed robocalls that use AI-generated voices.
It’s easy to see why. Just look at the capability of tools such as Bland.ai, which are meant to help businesses introduce AI-powered conversational engagement and humanize mundane processes. With a little training, AI tools such as Heygen could easily deceive people, and in the wrong hands, the consequences will be dire.
At the World Economic Forum in Davos, Nick Clegg, president of global affairs at Meta, said the company would lead technological standards to recognize AI markers in photos, videos and audio. Meta hopes that this will serve as a rallying cry for companies to adopt standards for detecting and signaling that content is fake.
The collaboration among industry partners and the development of common standards for identifying AI-generated content demonstrate a collective, meaningful effort to address the challenges posed by the increasing use of AI in creating misleading and potentially harmful content.
Standards will help social media companies identify and label AI-generated content, which will aid in the fight against misinformation and disinformation and protect people’s identities and reputations against deepfakes. At the same time, the introduction of AI markers and labeling standards is a significant step toward enhancing transparency, combating misinformation, and empowering users to make more informed choices about the content they encounter on digital platforms.
New services offer “human vibes” to people who use generative AI to do their work instead of using genAI to augment their potential
This interesting Forbes article asks, “did you use ChatGPT on your school applications?”
It turns out that using generative AI to do the ironically personal work of conveying why someone may be the best fit for a higher education institution is overwhelming admissions systems everywhere.
To help, schools are increasingly turning to software that detects AI-generated writing. But accuracy is an issue, leaving admissions offices, professors, teachers, editors, managers, and reviewers everywhere cautious about acting on potential AI detections.
“It’s really an issue of, we don’t want to say you cheated when you didn’t cheat,” Emily Isaacs, director of Montclair State University’s Office for Faculty Excellence, shared with Inside Higher Ed.
Admissions committees are doing their best to train for patterns that may serve as telltale signs that AI, and not human creativity, was used to write the application. According to Forbes, they’re paying attention to colorful words, flowery phrases, and stale syntax.
For example, these experts report that the following words and phrases have spiked in usage in the last year: “Tapestry,” “Beacon,” “Comprehensive curriculum,” “Esteemed faculty,” and “Vibrant academic community.”
To counter detection, in a way that almost seems counterintuitive, students are turning to a new type of editor to “humanize” AI output and help eliminate detectability.
“Tapestry” in particular is a major red flag in this year’s pool, several essay consultants on the platform Fiverr told Forbes.
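To make the pattern-spotting concrete, here is a toy sketch of the kind of buzzword frequency check reviewers describe. It is not any school's or vendor's actual detector; the phrase list comes from the Forbes reporting above, and the threshold is an illustrative assumption.

```python
# Toy illustration of buzzword-based flagging (not a real AI detector).
# The phrase list reflects terms reviewers told Forbes have spiked;
# the threshold of 2 is an arbitrary assumption for demonstration.
FLAGGED_PHRASES = [
    "tapestry",
    "beacon",
    "comprehensive curriculum",
    "esteemed faculty",
    "vibrant academic community",
]

def buzzword_hits(essay: str) -> dict:
    """Count case-insensitive occurrences of each flagged phrase."""
    text = essay.lower()
    return {phrase: text.count(phrase) for phrase in FLAGGED_PHRASES}

def looks_suspicious(essay: str, threshold: int = 2) -> bool:
    """Naively flag an essay once total hits reach the threshold."""
    return sum(buzzword_hits(essay).values()) >= threshold

essay = ("Joining your vibrant academic community would weave a rich "
         "tapestry of experiences guided by your esteemed faculty.")
print(buzzword_hits(essay))
print(looks_suspicious(essay))  # True: three flagged phrases appear
```

Even this trivial sketch shows why false positives worry admissions offices: an earnest applicant who simply likes the word "tapestry" trips the same counter, which is exactly the enforcement caution Isaacs describes.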
All of this is especially interesting because admissions offices are also deploying AI to automate the application review process and boost productivity among their workforce.
Sixty percent of admissions professionals said they currently use AI to review personal essays. Fifty percent also employ some form of AI chatbot to conduct preliminary interviews with applicants.
I originally wasn’t going to dive into this one, but then I realized, this isn’t just about students. It’s already affecting workforce output and will only do so at speed and scale. I already see genAI overused in thought leadership among some of my peers. Amazon too is getting flooded with new books written by AI.
The equivalent of the word “tapestry” is recognizable everywhere, especially when you compare the output to previous works. But, as with admissions committees, there is no clear solution.
And it makes me wonder: do we really need a platform that calls people out for using or overusing AI in their work? What is the spectrum of acceptable usage? What we do need is AI literacy for everyone (students, educators, policymakers, managers) to ensure that the human element (learning, expertise, potential) is front and center and nurtured as genAI becomes more and more pervasive.
AI doctors in a box are coming directly to people to make healthcare more convenient and approachable
Adrian Aoun is the cofounder of San Francisco-based health tech startup Forward, a primary care membership with gorgeous “doctor’s” offices that make your healthcare proactive with 24/7 access, biometric monitoring, genetic testing, and a personalized plan for care.
Now Aoun has announced $100 million in funding to introduce new 8×8-foot “CarePods” that deliver healthcare in a box in convenient locations such as malls and office parks.
The CarePod is designed to perform various medical tasks, such as body scans, blood pressure measurement, blood work, and check-ups, without the need for a human healthcare worker on site. Instead, CarePods send the data to Forward’s doctors for real-time or follow-up consultations.
AI-powered CarePods will make medical visits faster, more cost-effective, and I bet more approachable. There are skeptics though, and I get it.
Arthur Caplan, a professor of bioethics at New York University, told Forbes, “The solution then isn’t to go to jukebox medicine.” The use of the word “jukebox” is an indicator. It tells me there’s an assumption that things should work based on existing frameworks.
“Very few people are going to show up at primary care and say, ‘My sex life is crummy, I’m drinking too much, and my marriage is falling apart,’” Caplan explained.
But my research over the years communicates the opposite, especially among Generation-Connected. It is easier for men, for example, to speak more openly about emotional challenges to AI-powered robots. I’m not saying it’s better. I’ve observed time and time again, that the rapid adoption of technology in our personal lives is turning us into digital narcissists and digital introverts. Digital-first consumers want things faster, more personalized, more convenient, more experiential. They take to technology first.
“AI is an amazing tool, and I think that it could seriously help a lot of people by removing the barriers of availability, cost, and pride from therapy,” Dan, a 37-year-old EMT from New Jersey, told Motherboard.
CarePods aim to remove the impersonal, sanitized, beige, complex, expensive, clipboard-led healthcare experiences that many doctors’ offices provide today. If technology like this makes people take action toward improving their health, then let’s find ways to validate and empower it. We’ll most likely find that doing so makes healthcare proactive versus reactive.
Please subscribe to my newsletter, a Quantum of Solis.
Brian Solis | Author, Keynote Speaker, Futurist
Brian Solis is a world-renowned digital analyst, anthropologist, and futurist. He is also a sought-after keynote speaker and an 8x best-selling author. In his new book, Lifescale: How to live a more creative, productive and happy life, Brian tackles the struggles of living in a world rife with constant digital distractions. His previous books, X: The Experience When Business Meets Design and What’s the Future of Business?, explore the future of customer and user experience design and modernizing customer engagement in the four moments of truth.
Invite him to speak at your next event or bring him in to your organization to inspire colleagues, executives and boards of directors.