What is AI?
Since I have a thing for media literacy and I want you girls to be informed, we’re gonna explore AI: artificial intelligence. AI is an umbrella term for anything where a computer thinks and behaves like a human to solve problems and perform tasks. Here’s how AI works: large amounts of data are collected and input into the AI software. Then, when given a task, the AI processes that data using pre-programmed algorithms to analyze it and recognize patterns, and then produces results for the task.
AI is used in a variety of technologies, like self-driving cars, virtual customer service, interactive video games, and language translation. With just a few keywords, AI can also generate paragraphs of information and even create images, video, and music.
In a lot of applications, AI can be very helpful – but sometimes it’s too helpful. For example, if an AI software becomes more effective at doing a job than a person, a company might replace that person with AI, leaving them without a job.
That’s just one of many concerns about AI. As it relates to you, teens and tweens, I’ll share my biggest AI concerns and what you can do about them.
AI + Social Media
Just recently, Meta, the parent company of Instagram and Facebook, shared details about how their algorithms work using AI: “Each AI system has models that enable it to make multiple predictions about content that people find most relevant and valuable […] that people are most likely to engage with.” While social media algorithms themselves aren’t new, the latest advancements in AI technology are creating an even-more customized feed to keep you engaged on the app. That’s because social media companies know that the longer you stay on their app seeing ads and sponsored posts, the more money they make from the advertisers behind those ads.
Eric Schmidt, a former CEO of Google, and Jonathan Haidt, a social psychologist, referred to this as the widespread, skillful manipulation of people by AI super-influencers. Schmidt and Haidt warn, “We can expect many [social media platforms] to become far more addictive as AI becomes rapidly more capable. Much of the content served up to children may soon be generated by AI to be more engaging than anything humans could create.” On top of that, Snapchat and other platforms are launching dedicated chatbots on their apps for users to interact with. Snapchat said theirs can answer trivia questions or give you gift ideas for your BFF’s birthday. But Snapchat also knows its AI chatbot is flawed and could generate “biased, incorrect, harmful, or misleading content.” And yet they’re still moving forward with it.
So what can you do about Social Media AI? Since they’re tracking every single move you make, you need to make your moves count.
- Be mindful (instead of mindless) of what you do on apps – the algorithm will think you want more of whatever you engage with
- Don’t watch content you don’t like – keep scrolling
- Tap “Not Interested”/ “Hide Ad”/ “Irrelevant”
- Flag/report inappropriate content
- Retrain the algorithm by watching/liking/commenting/following/favoriting content you DO like
AI + Fake Content
AI can generate text using keywords. Now, not all AI text content is bad. But some college students have even used AI to write entire essays. I do not agree with this: 1 – it’s not doing your own work, it’s cheating, 2 – it’s likely plagiarism, and 3 – it’s likely inaccurate. The first one is self-explanatory, plus there are tools that allow teachers to check if students cheated using AI. The plagiarism issue arises because when AI processes all the data in its database for the entered keywords, pieces together text from various sources, and then generates several paragraphs as the result, there will likely be lines of text the AI copied word for word – and that is plagiarism, which is wrong. The result is also likely inaccurate because, again, the text was pieced together using bits of data that, when combined, may not be factual. Instead of authentic, thoughtful work, the result is fake text content created using AI.

Unfortunately it’s not just school papers we need to worry about. AI is definitely increasing the online spread of MISinformation – false/inaccurate info that gets shared – and DISinformation – false info intentionally shared to mislead people. This is a problem because a lot of folks believe everything they read on the internet. When people don’t have complete and accurate info, they can’t make informed decisions. Sadly, the creators of the disinformation are counting on this.
But there are ways you can spot AI text, at least until AI gets smarter. Look for:
- Short sentences – AI hasn’t mastered complex sentences (yet)
- Repeated words/phrases
- Facts only, with no analysis/critical thinking
- Inaccurate facts
- Verify the text came from a credible source/author; lateral read (check the same info across multiple credible sources)
- Use plagiarism checker tools
- Use AI detector tools
- Flag/report inappropriate/inaccurate, whether created by AI or humans
In addition to text, AI can generate visual media using keywords. A digital design platform I use has an AI image generator: you enter keywords, and it creates an image of that, like “a panda surfing a wave” (which is made up – pandas can’t surf, yet). The result shows pictures of just that: a surfing panda. It’s not a real picture, but it sure looks like it. Which is what concerns me. Again, AI can piece together various snippets of visual media and create its own images and video that look and even sound real. Earlier this year a photographer won an international photo competition with an AI-generated image – that’s how real it looked. There’s also a plagiarism issue with images – AI takes bits and pieces from copyrighted images without the artists’ permission and generates an image resembling the artists’ work. Lately we’re seeing more and more AI images/videos shared online of events and disasters, or of people saying/doing things that didn’t actually happen – but they look real, so people believe they are real. These are called deepfakes, named for the AI technique they use, called deep learning, to make images/video of fake events: deep-fake. Again, this is a problem because it spreads mis/disinformation. No longer can we believe everything we see.
So how do we know what to trust? There are telltale signs you can pick up on, again at least until AI gets smarter.
- Zoom in and slowly look closely to spot what’s off
- Body proportions are off, tiny ears/long fingers
- Teeth look different than real life
- Hands: extra fingers, no nails (there are people with limb differences – which is fine – but if the person in the photo doesn’t have limb differences in real life but does in the picture, it’s likely AI)
- Glitches in clothes, accessories
- A glow around the person/subject, like it was edited in
- Too smooth/perfect
- Videos lack facial expressions or natural body movements
- Strange shadows/light
- Background distortions or brush strokes
- Text/logos wrong
- Use reverse image search to find original source
- Use AI Art Detector
- Flag/report inappropriate/inaccurate, whether created by AI or humans
AI + You
My final concern (at least for this episode) has to do with you specifically. Just last month the FBI (here in the US) warned of “malicious actors” using AI to create deepfake images and videos to target victims, including children. These photos are altered into inappropriate content and then shared online to harass the victim or get money from them. The original photos and videos often come from the victim’s social media content (or their parents’). Once the AI photos/videos are shared online, it’s very difficult to get them removed.
Here are some of the FBI’s many recommendations for what you can do when sharing content or connecting with people online:
- Be cautious when posting images/video
- Do not share personal info – name, birthday, address, city, school, teams
- Make your profile private
- Don’t Friend/Follow or DM people you don’t know; verify requests aren’t hackers posing as your friends
- Be wary of people who immediately ask for something or pressure you
- Do not send money or sensitive info/photos/video
- Use complex passwords and security measures, like 2FA: two-factor authentication
I’ll add to this list:
- Talk to a trusted adult about any interactions that don’t feel right
- Report any suspicious user or content to the app and, if necessary, to police
One question I have is: are social media companies doing anything about fake content? Some companies have policies about labeling AI media and prohibiting misinformation or inappropriate content, but are these policies actually being enforced? I’d honestly like to know. Because I’d prefer they take stronger measures to PREVENT that kind of content BEFORE it’s shared, instead of waiting to get involved until AFTER the initial damage is done.
AI technology isn’t inherently bad. How it’s being used/misused is what’s concerning. Social media and AI companies should be less concerned about making money and more concerned about their impact on people, especially children. But until they figure out how to get a handle on AI, at least you now have some media literacy tools to help you be more aware of the AI around you.
Be Aware of AI Poster Printable
To help you with this, I created a “Be Aware of AI” poster for you to print out, personalize, and post on your wall where you’ll see it. Remember to write in it, practice it, and believe in it — that’s the important part.
If you have a topic suggestion, I’d love to hear from you! Send an email (tweens get the OK from your parents) to [email protected] .
If you have social media already, follow me on Insta or TikTok @empowerfulgirls. I’m not encouraging or endorsing social media, but I’m on there to offer an unfiltered, uplifting alternative to what’s in your feed. Remember to get on the email list for the newsletter!
Also, if you enjoy listening to 10 for Teens + Tweens, I would truly appreciate you telling your friends about this podcast or leaving a review so others can find it and feel uplifted, too! Your support means the world to me!