
We Asked Chatbots About Home Security: Here's Why You Can't Trust Them

Bots like ChatGPT are giving false and confusing answers to important home security questions -- here's why that matters.

Tyler Lacoma Editor / Home Security

Chatbots have come a long way, but you can't trust them with important security questions.

Getty Images

I’ve been a proponent of useful AI in home security, where it’s holding conversations for us, identifying packages, learning to recognize important objects and searching our video histories to answer questions. But that doesn’t mean you should pop open ChatGPT and start asking it security questions.

Generative and conversational AI tools have their uses, but it’s a bad idea to ask any chatbot about your safety, home security, or threats to your house. We tried -- and it’s unnerving how much they get wrong or can’t help with.

There are good reasons for this: Even the best LLMs, or large language models, still hallucinate information from the patterns they've gleaned. That's an especially big problem in smart home tech, where specs, models, compatibility, vulnerabilities and updates shift so frequently. That means it's easy for ChatGPT to get confused about what's right, current or even real.

Let's look at a few of the biggest mistakes, so you can see what I mean.

Chat AIs hallucinate that Teslas are spying on your home security


Asking a chatbot about specific security technology is always a risky business, and nothing illustrates that quite so well as this popular Reddit story about a chat AI that told the user a Tesla could access their "home security systems." That's not true -- it's probably a hallucination based on Tesla's HomeLink service, which lets you open compatible garage doors. Services like Google Gemini also suffer from hallucinations, which can make the details hard to trust.

While AI can write anything from essays to phishing emails (don't do that), it still gets information wrong, which can lead to unfounded privacy concerns. Interestingly, when I asked ChatGPT what Teslas could connect to and monitor, it didn't make the same mistake, but it did skip features like HomeLink, so you still aren't getting the full picture. And that's just the start.

Chatbots can't answer questions about ongoing home threats or disasters


Conversational AI won't provide you with important details about emerging disasters.

Tyler Lacoma/ChatGPT

ChatGPT and other LLMs also struggle to assimilate real-time information and use it to provide advice. That's especially noticeable during natural disasters like wildfires, floods or hurricanes. As Hurricane Milton was bearing down this month, I queried ChatGPT about whether my home was in danger and where Milton was going to hit. Though, thankfully, the chatbot avoided wrong answers, it was unable to give me any advice except to consult local weather channels and emergency services.

Don't waste time on that when your home may be in trouble. Instead of turning to AI for a quick answer, consult weather apps and software like Watch Duty; up-to-date satellite imagery; and local news.

LLMs don't have vital updates on data breaches and brand security


While ChatGPT can compile information about a security company's track record, it leaves out key details or gets things wrong.

Tyler Lacoma/ChatGPT

It would be nice if AI chatbots could provide a summary of a brand's history with security breaches and whether there are any red flags about purchasing the brand's products. Unfortunately, they don't seem capable of that yet, so you can't really trust what they have to say about security companies.

For example, when I asked ChatGPT if Ring had suffered any security breaches, it mentioned that Ring had experienced security incidents, but not when they occurred (before 2018), which is a vital piece of information. It also missed key developments, including the completion of Ring's payout to affected customers this year and Ring's 2024 policy reversal that made cloud data harder for police to access.


ChatGPT isn't good at providing a timeline for events and shouldn't be relied on to make recommendations.

Tyler Lacoma/ChatGPT

When I asked about Wyze, which CNET isn't currently recommending, ChatGPT said it was a "good option" for home security but mentioned it suffered a data breach in 2019 that exposed user data. It didn't mention that Wyze exposed databases and video files in 2022, then had vulnerabilities in 2023 and again in 2024 that let users access private home videos that weren't their own. So while summaries are nice, you certainly aren't getting the full picture on a brand's security history or whether it's safe to trust.

Read more: We Asked a Top Criminologist How Burglars Choose Homes

Chat AIs aren't sure if security devices need subscriptions or not


ChatGPT can't adequately explain security subscriptions or tiers.

Tyler Lacoma/ChatGPT

Another common home security question I see is about the need for subscriptions to use security systems or home cameras. Some people don't want to pay ongoing subscriptions, or they want to make sure what they get is worth it. Though chatbots can serve up plenty of specifics when you ask for, say, a recipe, they aren't any help here.

When I questioned ChatGPT about whether Reolink requires subscriptions, it couldn't give me any specifics, saying many products don't require subscriptions for basic features but that Reolink "may offer subscription plans" for advanced features. I tried to narrow it down with a question about the Reolink Argus 4 Pro, but again ChatGPT remained vague about some features being free and some possibly needing subscriptions. As answers go, these were largely useless.

Meanwhile, a trip to CNET's guide on security camera subscriptions or Reolink's own subscriptions page shows that Reolink offers both Classic and Upgraded tier subscriptions specifically for LTE cameras, starting at $6 to $7 per month, depending on how many cameras you want to support, and going up to $15 to $25 for extra cloud storage and rich notifications/smart alerts. Finding those answers takes less time than asking ChatGPT, and you get real numbers to work with.

ChatGPT isn't the place for your home address or personal info, either


Don't let chatbots know too much about your personal info.

Vertigo3d via Getty

As the famous detective said, "Just one more thing." If you do ever query a chatbot about home security, never give it any personal information, like your home address, your name, your living situation or any type of payment info. AIs like ChatGPT have had bugs before that allowed other users to spy on private data like that.

Additionally, LLM privacy policies can always be updated or left vague enough to allow for profiling and the sale of user data they collect. The scraping of data from social media is bad enough; you really don't want to hand personal details over directly to a popular AI service.

Be careful what data you provide as part of a question, and even how you phrase your query, because there's always someone eager to take advantage of whatever data you let slip. If you think you've already given out your address a few too many times online, we have a guide on how you can help fix that.

Read more: Your Private Data Is All Over the Internet. Here's What You Can Do About It

For more information, check out whether you should pay for more-advanced ChatGPT features, and take a look at our in-depth review of Google Gemini and our coverage of the latest on Apple Intelligence.