2025-11-18 | IT Law

1. Limited Insight Into Legal Processes
AI is often capable of providing concise answers to specific questions or presenting the logic of a particular issue. However, it lacks the ability to understand an entire legal process or assess long-term consequences. The legal aspects of a case often unfold through multiple stages, before various judicial or administrative bodies, sometimes over the course of several years. AI is not capable of adequately reflecting this complexity: it may successfully assist in moving from “point 1 to point 2,” but it cannot comprehend the multifaceted path that leads from “point 2 to point 10.”
It must also be noted that a truly skilled lawyer is characterised not merely by knowledge of the applicable legal rules, but by the ability to realistically assess the potential outcome of a matter, consider alternative avenues, and recognise when it is advisable to litigate or to settle. This requires a pragmatic, realistic approach that current AI models cannot reliably provide, as they tend to tell the user what they want to hear rather than what they need to hear.
2. Fabricated Sources and Incorrect References
One of the most serious problems associated with generative AI is so-called “hallucination”: the system frequently cites legal provisions, academic articles, or court decisions with full confidence—even when these sources do not actually exist. This is particularly dangerous in the legal field, where the precise wording of a single statutory section may be decisive. Clients may easily be misled into believing that they have received well-founded and verified information, when in fact the citation is false or entirely fabricated.
Compounding this issue is the fact that AI systems often rely on open internet sources of highly variable reliability. AI is not necessarily capable of distinguishing between a professionally credible legal analysis and a casual posting on an informal online forum. As a result, the generated text may contain a mixture of accurate observations and entirely erroneous information, which poses a significant risk to clients.
3. Absence of Liability
Developers of AI systems explicitly exclude liability for the accuracy of content generated by their models. A well-known example is the widely publicised incident of August 2024, where misinformation concerning a chemical substance (sodium bromide) contributed to a poisoning event. OpenAI immediately emphasised that, under its Terms of Service, it accepts no responsibility for the system’s responses. This clearly demonstrates that using AI in legal matters is entirely at the user’s own risk.
By contrast, a lawyer assumes legal responsibility for the advice provided, thereby offering the client a level of protection and security.
4. The Advantages of AI – When Used Properly
Despite the risks, it would be a mistake to oppose the use of AI entirely. When applied with caution and sound judgment, the technology can be a valuable tool:
- it can help clarify basic legal concepts,
- summarise relevant legal fields,
- highlight possible directions or approaches, and
- save time during the initial information-gathering stage.
Thus, AI may serve as an excellent starting point, but it can never replace personalised legal advice given under professional responsibility.
5. Conclusion
Artificial intelligence is one of the most significant technologies of the future, and its development should not be hindered.
We do not wish to discourage our clients from using such tools: they may be safely consulted for quick guidance or for navigating the complexities of the legal system.
However, we politely but firmly recommend that in all serious legal matters, disputes, or situations requiring important decisions, clients turn to a lawyer. AI may provide orientation, but it cannot substitute for legal advice grounded in responsibility, professional experience and human judgment.
(The image used in this article was generated by the ChatGPT AI engine.)
If you require legal assistance, please feel free to contact our expert colleagues.

