How to stay safe in the era of AI

AI is a part of life for many people, but it’s also something scammers are using. How do you stay safe in a world where AI is present?

Whether you like it or not, AI is a part of life for many Australians, and for people all over the world. It’s on the internet and in seemingly every service, and even if you turn it off where you can, chances are you’re still using a little AI in some form or another.

But what happens when AI is used to harm you, or to steal from you? Can AI even be used in that way? And what if no harm was intended, but AI made it happen anyway?

Living in the AI era clearly isn’t going to be easy, so here are some tips to help you stay safe throughout.

Tread carefully with tools promising productivity hacks

AI is in everything these days, and it seems almost impossible to escape. It’s in your phone, in your computer thanks to the AI PC, and in your watch, your oven, and your car. It just seems to be everywhere, almost all at once.

It promises a lot, and while it clearly has environmental and ethical issues aplenty, it does appear to help deal with large quantities of data quickly. Stories where AI is used for good do appear from time to time, and while they are few and far between, it’s nice to see that AI can do something positive, such as improving breast cancer screenings and helping healthcare workers monitor other conditions.

But what about your life directly, and how you conduct yourself and plan your day?

AI appears to be making a dent in that with productivity “hacks”: AI-assisted tools that have been popping up in the past year, poised to leverage the power of AI for your own purposes using something called “agentic AI”.

While it sounds like a fancy new bit of jargon, agentic AI simply means using AI services as agents for something else. It could be several automated search systems working together in “agentic search”, or several systems finding products and buying them for you in “agentic shopping”.

Agentic anything seems to be the buzzword of choice for 2026, and it’s one you can expect to hear a lot, because agentic services are apparently good for productivity, too. It’s like having multiple agents working on your behalf, or that’s the theory, anyway.

[Diagram: a user’s request goes to an AI agent, which plans, reasons, and calls on tools before returning a result.]

Imagine several computer applications managing your calendar, checking your emails, looking at anything you need to do (like checking into a flight) and then doing it, before letting you know on WhatsApp, Discord, or just about any other service. They keep acting on your behalf until they run out of actions, and then they stop.

That’s an agentic platform and it could be one working for you, but it could also come with its own problems.
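If you’re curious what’s happening under the hood, the loop is simple enough to sketch. Here’s a minimal, illustrative version in Python, with every name and “tool” in it made up for the example rather than taken from any real platform:

```python
# A toy agentic loop: plan a step, run a "tool", feed the result back,
# and stop when there's nothing left to do. All names are hypothetical.

TOOLS = {
    "check_calendar": lambda: "Flight today, check-in is open",
    "check_in_flight": lambda: "Checked in, boarding pass saved",
    "send_message": lambda: "Summary sent to your chat app",
}

def plan_next_step(done):
    """Stand-in for the AI 'reasoning' step that picks the next tool.
    A real agent would ask a language model; here it's a fixed plan."""
    plan = ["check_calendar", "check_in_flight", "send_message"]
    remaining = [step for step in plan if step not in done]
    return remaining[0] if remaining else None

def run_agent(max_actions=10):
    results = {}
    for _ in range(max_actions):        # the agent stops when out of actions
        step = plan_next_step(results)
        if step is None:                # nothing left to do
            break
        results[step] = TOOLS[step]()   # act, then remember the result
    return results

print(run_agent())
```

The part worth noticing is that every “tool” is a door into your accounts: calendar, email, airline, messaging. Hand the loop real access to those and it can act without checking with you first, which is exactly where the security problems start.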

One of the most popular in recent weeks, OpenClaw (formerly Moltbot and ClawdBot), has already seen numerous security problems arise, and could leave your data and account access in a position where scammers and criminals could take it out from under you.

Simply put, just because something is AI doesn’t mean it’s advanced, nor does it mean it has been developed with security in mind. It may well have something that can help you, but that doesn’t mean you should necessarily give it carte blanche for your digital life.

Instead, you may want to approach with hesitation, and be just that little bit cautious.

Don’t believe everything you see, read, or hear

Being cautious is good advice for lots of things, and it could stop you getting caught out by a variety of tricks, AI-generated fake images chief amongst them.

In the AI era, it’s easier than ever to make anything appear out of nothing. That includes pictures of food, pictures of people, screenshots from phones… really anything, even news found through search.

That means your eyes can’t be easily trusted, and you may need to go with your gut. If someone sends you a picture out of the blue, or a screenshot purporting to show something, you should probably question it, because that’s the world we live in now, it seems.

It’s not just your eyes, either.

Your ears can be tricked with AI thanks to voice clones, which can recreate someone’s voice from as little as five seconds of audio, while video clones can use some of that technology alongside a video model to recreate a person’s movements with a convincing double.

A digital clone (sometimes called a “digital twin”) won’t be anywhere near as convincing as the real individual, but at an online glance, without thinking too much about it, you might find yourself tricked, and that’s a problem.


The risk of oversharing

Scammers are always looking for ways to turn your information into money, whether that’s by having you believe a fake call, a fake email, or even just following up on details found in a data breach.

That alone is a source of frustration: details leaked through means beyond your control give criminals a way to use data you shared privately, and likely for good reason, against you.

But what happens when information you’re sharing for fun is used in the same way? Can that even happen?

If you’re someone who uses AI services, there’s a risk that oversharing with an AI could see that data used to train the platform, or even leaked later on.

While not every service follows the same rules, and some actively protect against platform training, the early days of AI only a couple of years ago were littered with complaints that information was being used to train the models, meaning anything you shared or overshared could have found its way into other people’s conversations. That’s one risk, but it’s not alone.

As we all know from data breaches of services outside of AI, information can get leaked and stolen all too easily. While AI services may store it differently, that doesn’t make it any less risky.

Simply put, be careful what you share with an AI service. Much like the risk of an AI tool exposing your data through poor security, there’s also the chance a service could do the same inadvertently.

It’s not the only risk, either.

In recent years, scammers have been taking advantage of people sharing voice and video on social, recording those samples, finding family members, and turning what might be seen as “oversharing” into criminal schemes.

One of the more consistent schemes growing in number is to copy a voice and claim the person has been kidnapped, demanding family members pay up.

Aided by technology, scammers can map the voice to a keyboard and type to speak through it, or even replace their own voice in real time using a speech-to-speech system, pushing demands through the phone and extorting people in the process.

They don’t need much more than a few seconds of voice to pull it off, an approach known as a “zero-shot” voice clone. From there, a criminal can use the voice almost as their own, looking for ways to tug on heartstrings with the urgency of a real-life plea. People won’t think; they’ll simply act, giving the criminal what they want.

Meanwhile, video clones can also take advantage of others by purporting to be someone else, often advertising something that doesn’t exist.

You’ve probably seen a fake celebrity or a fake government official, and these are just becoming the norm. While fake people were once the domain of mere text and images on social media, doctored videos using deepfake video and voice cloning technology now make it harder to trust what you see and hear. Oversharing can lead to fake versions of less famous people too, used to trick others out of real money.

Create a passphrase for your friends and family

With digital clones growing in number as AI voice cloning takes off, you might be wondering just what you can do. Already, the scam has seen losses mount into the tens of millions in the past year alone, and that figure will likely only rise.

Going on the defensive is also difficult because we live in an age where sharing is normal. Video sharing naturally includes audio, and that means voices that can be easily extracted and copied for cloning.

However, you can limit oversharing by not tagging or linking to friends and family publicly, and by keeping official identification cards, addresses, boarding passes, and other materials offline, since these can be used not only to identify you, but also to decipher who you know.

The same is true for your kids and loved ones: do your best to keep their voices and videos offline, too. If enough of a loved one’s voice is online, you can only imagine how criminals might twist it to tug on your heartstrings.

If it seems like there’s nothing you can do to thwart a scammer’s AI voice clone attempt, though, you may want to think about how you can legitimise yourself against the digital twin.

To do that, you need a code: something short, sweet, and impossible to guess, which you can share with your family in case the worst happens.

What you need is an emergency passphrase of sorts, something specific to a person, so that if someone were to make contact in an emergency, frantic, they’d be able to cite the phrase, or at least some of the words in it.

To help, we’ve come up with a generator (below) you can use to build your own, letting you share the phrase with your friends and family.
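There’s no magic to it, either. If you’d rather roll your own, here’s a rough sketch in Python of what a generator like this does, with a deliberately tiny example wordlist standing in for the thousands of words a real one would draw from:

```python
# A toy passphrase generator: pick a few unrelated words at random.
# The wordlist is a tiny example; a real generator uses a far bigger one.

import secrets

WORDS = [
    "lantern", "pelican", "marble", "thunder", "origami", "velvet",
    "harbour", "cactus", "biscuit", "meteor", "paddock", "saffron",
]

def make_passphrase(count=3):
    # secrets.choice picks words with cryptographic randomness,
    # avoiding the guessable patterns people invent for themselves.
    return " ".join(secrets.choice(WORDS) for _ in range(count))

print(make_passphrase())  # e.g. "pelican saffron lantern"
```

The maths is on your side here: even a modest list of 4,000 words offers 64 billion possible three-word combinations, which is why a randomly generated phrase is so hard for a scammer to guess.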

An emergency passphrase is a little like a safety phrase for kids at school pickup, giving both kids and parents peace of mind that their kids won’t go home with a stranger.

This is very much like that: something you and your family can remember entirely, or even just one word of. There are over 400,000 words in the English language, so it’s highly unlikely a scammer would guess even one of the words in your passphrase, let alone all three.

In the heat of a scam call attempting to pull on those heartstrings with an AI-cloned voice, asking for just one word of the emergency passphrase will almost always see the question dodged and the urgency pushed instead.

And if you want to make a scammer give up, it’s one approach that could work in this AI era, for a few basic reasons: it’s manual, it’s disconnected from your personal information, and when it’s generated randomly (like with our little generator), it’s impossible to guess.

That’s a potential win for your online safety, and something that could just help.