
The Content Moderation Imperative: How AI and Humans Will Make Social Media and Gaming Better for the World

I had the privilege of participating in an important conversation about the state and future of social media and gaming and the need to protect users from harmful content and engagement. Joining me in that conversation were Chris Priebe, CEO and founder of Two Hat Security, and Stewart Rogers of VentureBeat. You can listen to it (and I encourage you to do so) here.

Leading up to the discussion, I spoke with VentureBeat on the subject and the critical need for intelligent content moderation. The article summarized the conversation nicely. I also wanted to share the unabridged conversation with you because it dives deeper into the core issues, platform and advertiser opportunities, and user benefits, and it’s much more candid.

Make content moderation good for business — and change the world

Q: We’ve become increasingly aware that user generated content on social platforms is open to all kinds of abuse. How do companies balance the benefits of engaging users on social platforms or in communities with the risks, decide where the boundaries are, and keep as many users as possible engaged without overstepping?

Brian: This whole idea is new. We’re still navigating what this looks like and what the best management practices are. We need to learn from the incidents we’re seeing, like the violence livestreamed on Facebook, cyberbullying, trolling, and other toxic behavior, and take steps to prevent them and to protect online communities from that kind of behavior. At the same time, we have to, as an industry, clearly agree on what is and isn’t OK, and stand united.

Human interactions are complex and nuanced, so this isn’t an easy task. Generations are functioning on different standards. Stakeholder and shareholder pressure to monetize at all costs is pervasive. At the same time, there’s a huge disconnect between experts, parents, teachers and society as it evolves. This is creating new norms and behaviors that bring out the best and the worst in us on each platform. The challenge is that not all groups see things the same way, or even agree on what’s dangerous or toxic at all. Some people are moving so fast between accelerating incidents and catastrophic events that it’s impossible to be empathetic for more than a minute, because something else is always around the corner. And leading platforms either don’t react until something happens, pay lip service to the issues, or facilitate dangerous activity because it’s good for business.

Hate speech, abuse and violence should not be accepted as the price we pay to be on the internet. We’re bringing humanity back into the conversation. It’s why we’re here.

Q: In concrete terms, what are the risks that companies expose themselves to and the potential costs?

Brian: In a perfect world, this wouldn’t even be a discussion. Though it is a nuanced concept and constantly evolving, this kind of technology and practice is good for business. Brands and users shouldn’t participate in or associate with platforms that host hate speech, abuse and illegal content. Yet, they do. With the very recent Facebook hearing and live streaming of violence, platforms are at an inflection point in how they manage their communities. What they do from here on out, and also what they don’t do, speaks volumes.

For example, Tumblr was removed from the app stores because there was child sexual abuse material (CSAM) on the platform. Last year they took steps to remove all pornographic content and were then allowed back on the app stores. Being removed from the app store is a big hit and should be a huge deterrent. What does it say about our world when Apple is one of the only police forces protecting people online?

Companies should proactively implement this kind of protection in order to replace toxic traffic with better-quality users. That’s better for online communities, and it’s better for long-term engagement.

Q: How do they best implement these boundaries? What is the best approach to content moderation?

Brian: Before implementing this kind of practice in your platform, you really need to identify and understand what kind of community you want to foster. Each platform has its own distinct behaviors and personality. For example, do you want a community like Medium, a positive and inviting community that embraces personal expression? Or would you want to be an 8chan, a toxic environment? It’s not an easy question to ask yourself, but it’s necessary.

I think once you establish your vision for the community you plan to build, you can then set very clear boundaries on what’s acceptable and what is not.

We know that technology and AI can only shoulder so much of the responsibility. A mix of both AI and human intelligence (HI) working together to be proactive in avoiding and removing unacceptable content is the best practice.

Q: What is the role of AI in meeting the challenges – and the corresponding role of human intelligence?

Brian: AI cannot do it all. It can do as much, or as little, as we direct it to; it comes down to the mandate and intended outcomes. I think it would put us at a disadvantage to believe that technology itself can fix this problem. This is a problem of human interaction and behavior, James Bond-level villainy and intentionally unethical intent, an incredible absence of consequences, and the emboldened behaviors that result. So we need humans, AI and more to help fix it.

It’s a symbiotic relationship between AI and HI. Chris likes to describe AI as a toddler; it can identify things and inform you. It’s very good at that. AI is also much faster at processing and identifying sensitive material. But it’s still learning context and nuance.

That’s where humans come in. Humans can take the information from AI to make the tough calls, but in a more efficient way. We can let AI bear the responsibility of processing a lot of toxic material, then let humans come in when needed to make the final decision.
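To make that division of labor concrete, here is a minimal sketch of a human-in-the-loop moderation flow. The classifier, thresholds and field names below are hypothetical illustrations, not Two Hat’s product or any specific platform’s API: a model scores each piece of content, confidently bad material is removed, confidently fine material passes through, and the uncertain middle is escalated to a human reviewer.

```python
# A minimal sketch of AI-assisted, human-in-the-loop moderation.
# The classifier, thresholds, and actions here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Decision:
    content_id: str
    action: str      # "approve", "remove", or "escalate"
    score: float     # model's estimated probability that the content is toxic

def toxicity_score(text: str) -> float:
    """Placeholder for an ML model that returns a toxicity probability (0-1)."""
    flagged_terms = {"hate", "abuse"}          # stand-in for a real model
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def triage(content_id: str, text: str,
           remove_at: float = 0.9, escalate_at: float = 0.5) -> Decision:
    """AI handles the clear cases; anything uncertain goes to a human reviewer."""
    score = toxicity_score(text)
    if score >= remove_at:
        return Decision(content_id, "remove", score)    # confident violation
    if score >= escalate_at:
        return Decision(content_id, "escalate", score)  # human makes the final call
    return Decision(content_id, "approve", score)       # confident it's fine

if __name__ == "__main__":
    for cid, text in [("1", "great stream!"), ("2", "pure hate and abuse")]:
        print(triage(cid, text))
```

In a setup like this, the thresholds are the policy lever: lowering the escalation threshold routes more borderline cases to humans, trading reviewer workload for fewer decisions made by the machine alone.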

Q: Does our approach to content moderation need an overhaul? What does raising the bar mean to you?

Brian: We’re at a very interesting point in time. The internet creates immeasurable opportunity, mostly for good, but not always. As a society, we need to ask ourselves the tough questions. Are we really going to accept that this kind of behavior, CSAM (child sexual abuse material), extremism, livestreamed violence, cyberbullying, exploitation, psychological abuse and civil warfare, and intentional misinformation campaigns, is the cost of being online? It’s unacceptable. It’s unacceptable that we even have to have this conversation. I think the majority of people would agree. Users need to demand healthy online behavior. That’s the new cost of admission. Without the users, social platforms can’t exist.

It’s a human problem. Humans are using technology for evil, certainly. But we can also use technology as a solution. This kind of content moderation doesn’t hinder the ability to express, it protects our expression. It allows us to continue to post online, but with some reassurance that we’re in a welcoming environment.

Raising the bar means raising our standards. It means demanding that online communities foster healthy environments, protect their users from toxic behavior, and be unapologetic in doing so. And it means raising our own standards as users. We need to understand the motivation behind our own behavior and ask, “Why am I sharing this content? Why am I making this comment? Does this contribute to the greater good? Would I say this in real life? Why can’t I see that I am an asshole, a villain, or an active part of the problem?”

We, as users and a society, need to demand change from platforms. We’re the ones using the platforms and feeding their ad revenue, so we have the power to influence their policies.

Q: And how will companies benefit – in concrete terms – from raising the bar?

Brian: This is good for business and it’s also what’s right. Platforms won’t have to fear losing advertisers or users because of harmful content and behavior. Brands and advertisers will look to platforms that have positive, lasting engagement.

We’re also at a point where doing the right thing is also good for business. Whether it’s CSR or #MutingRKelly, platforms are keenly aware of the reputation they build. Implementing this kind of technology, and building a healthy community by doing so, will help brands’ reputations, goals and bottom lines.

Q: Forecasting ahead, where do you think we’ll be 1-2 years from now on this issue?

Brian: How different this conversation looks depends on what we do now. If we do nothing, we won’t be asking where we went wrong or what we could have done better; we’ll know. The technology is available; you can either use it or lose it. In the future, we’ll see platforms going to greater lengths to ensure healthy communities and protect their users. We won’t be weighing the options of “should we use this technology” or “what is the cost/benefit analysis of adopting this practice?”

We’ll be very intentional about what platforms we use, what we post and why. All of us will have access to the best technology and established best practices for ourselves, our customers, our family, our friends and our society.

Please listen if you care about fixing and protecting online experiences: Raising the bar on content moderation for social & gaming platforms

Brian Solis

Brian Solis is principal analyst and futurist at Altimeter, the digital analyst group at Prophet, a world-renowned keynote speaker, and an 8x best-selling author. In his new book, Lifescale: How to live a more creative, productive and happy life, Brian tackles the struggles of living in a world rife with constant digital distractions. His model for “Lifescaling” helps readers overcome the unforeseen consequences of living a digital life to break away from diversions, focus on what’s important, spark newfound creativity and unlock new possibilities. His previous book, X: The Experience When Business Meets Design, explores the future of brand and customer engagement through experience design.

Please invite him to speak at your event or bring him in to inspire colleagues and fellow executives and boards.

Follow Brian Solis!

Twitter: @briansolis
Facebook: TheBrianSolis
LinkedIn: BrianSolis
Instagram: BrianSolis
Youtube: BrianSolisTV
Newsletter: Please Subscribe
Speaking Inquiries: Contact Him Directly Here 

____________________________

Follow Lifescale!

Main Newsletter: Please Subscribe
Coaches Newsletter: Please Subscribe
Twitter: @LifescaleU
Instagram: @LifescaleU
Facebook: Lifescale University
