NEW YORK – The use of artificial intelligence (AI) applications in real estate today is nearly ubiquitous, ranging from MLS listing input tools and chatbot-driven customer interactions to predictive analytics that identify potential buyers and sellers. These AI tools can provide valuable insights, improve lead capture and conversions, and ultimately give agents more time to spend with their clients.
If you have already discovered some of the benefits AI delivers, you are not alone: according to a recent AI survey, nearly nine in ten real estate brokerage leaders report that their agents actively use AI tools. While these tools offer undeniable advantages, the dark side of AI could unknowingly cause you to make mistakes – or worse – lead you into unexpected dangers.
Knowing that AI has a dark side, what can you do to practice safe and responsible AI? Here are a few tips:
Know that AI can lie
Many real estate agents use chatbots to create content, from property descriptions to blog posts to social media updates. However, a study by the Tow Center for Digital Journalism at Columbia University tested eight AI search engines and found that, collectively, they provided incorrect answers to more than 60% of queries.
This problem is compounded by the fact that chatbots are terrible at declining questions they cannot answer accurately. Instead, they will hallucinate (make up answers), because a chatbot’s goal is to please you by answering your questions.
Recognizing errors in a chatbot’s reply is difficult because incorrect answers are delivered confidently. Chatbots also cite facts and research that do not exist, and sometimes even offer fabricated internet links.
Pro tip: When using AI-generated content, fact-check everything before publishing. If an AI tool provides a statistic, research the source. Never assume AI-generated links or citations are real: always verify them before sharing them with clients.
AI can amplify bias
AI tools learn from existing data, much of it accumulated from the internet – and the internet has no editor. The data AI is trained on isn’t always fair or neutral, and when AI is trained on biased information, its output can reproduce that bias and run afoul of legal and regulatory requirements. Real estate agents must therefore be especially cautious when using AI-generated content.
For example, some AI-driven mortgage and tenant-screening tools unintentionally discriminate. A report by the University of Chicago highlights how AI tools carry systemic biases, particularly against low-income individuals and people of color. When these tools are used for tenant screening or mortgage applications, built-in bias can lead to discriminatory outcomes that disproportionately reject applicants of color, even when their financial profiles are similar to those of white applicants.
Pro tip: If you use general-purpose AI chatbots – rather than AI systems built into your MLS – to create property descriptions, always carefully verify that the word choices follow Fair Housing and HUD guidelines and do not exclude or disadvantage any group. Better yet, use your MLS’s AI, which automatically eliminates non-compliant language.
AI is powering more sophisticated scams
AI is opening up new avenues for fraudsters. Scammers use AI to create convincing fake documents, impersonate property owners, and facilitate deed fraud.
What AI can do in the wrong hands is alarming, and it is making scams harder for agents to detect. In one case, AI was used to impersonate a company’s chief financial officer during a video conference call, tricking a finance worker into transferring approximately $25 million to the crooks. AI recreated the CFO’s appearance and voice so convincingly that the worker did not discover the fraud until later.
Pro tip: Always verify identities through multiple channels before proceeding with transactions, particularly with remote or high-value deals. Request in-person meetings, and when that’s not possible, use tools like Forewarn or People Safe, smartphone software that lets you quickly verify client identities, run background checks, and confirm property ownership details. Using these apps can help prevent fraud and protect your transactions.
AI tools you use may not be secure
While agents commonly use chatbots for writing content, not all AI tools have built-in security measures to protect sensitive data.
Most free chatbots store and analyze the information entered, meaning anything you type could be logged, reviewed, or even used to train the model further. This can create a serious risk if an agent unknowingly enters personal information as part of their query.
Even premium AI tools designed for business use can have vulnerabilities. If an AI tool lacks strong data encryption, secure authentication, or compliance with industry standards, sensitive information may be at risk.
Pro tip: Never enter personal, financial, or client-sensitive information into an AI chatbot. Assume that anything you type into a chatbot is not private: treat it like a public forum, and you’ll protect yourself and your clients.
Practice safe and responsible AI
Being aware of both the benefits and dangers of AI can help you use it safely and responsibly. Think of AI as an assistant, not as a standalone decision-maker.
The key takeaway? Embrace AI, but don’t be blinded by it. Your expertise, ethics, and gut judgment are still your most valuable tools.
Whenever in doubt, reach out: Tech Helpline’s highly trained analysts are available to help you successfully navigate the bold – and ever-evolving – world of AI, answering your questions and addressing your concerns.
Source: Tech Helpline
© 2025 Florida Realtors®