
The EU AI Act Has a Rational Brain. Balkan Teams Don’t – And That’s Where the Risk Starts

  • Writer: elenaburan

When we talk about AI risk in Europe, we usually mean models, data and use cases. But there is another layer of risk that is much quieter: how different minds and cultures read the same law.


The EU Artificial Intelligence Act (AI Act) is the first comprehensive legal framework for AI in the world. It creates a risk-based regime: some AI uses are banned as “unacceptable risk”, others are tightly regulated as “high-risk”, and the rest fall into lighter categories with transparency obligations.


From a legal perspective, this makes sense. From a cognitive and cultural perspective – especially in the Balkans – it is more fragile than it looks.

I want to explain why, and why I’m now building IPER Lex / AI-radar prava as an AI-assisted mediator between EU law and local reality.


Four types of intelligence that read the same law differently


For years I’ve been working with a typology of intelligence I call IPER:

  • Homo Intuitivus (HI) – intuitive, visionary, pattern-seeking, strategic.

  • Homo Rationalis (HR) – analytical, rule-oriented, loves structure and control.

  • Homo Ethicus (HE) – relationship- and value-centred, sensitive to fairness and trust.

  • Homo Practicus (HP) – practical, action-oriented, focused on “what do we do on Monday”.


Real people and real teams are always mixtures, but usually one type dominates.
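
To make the “mixture” idea concrete: a profile can be pictured as a weight on each of the four types. Below is a minimal sketch in Python; the class and the sample weights are purely illustrative, not part of any published IPER instrument.

```python
from dataclasses import dataclass

@dataclass
class IPERProfile:
    """A mind (or a text) as a mixture of the four IPER types.

    Weights are rough fractions summing to about 1.0; the largest
    weight marks the dominant type.
    """
    hi: float  # Homo Intuitivus - vision, patterns, strategy
    hr: float  # Homo Rationalis - rules, structure, control
    he: float  # Homo Ethicus    - relationships, fairness, trust
    hp: float  # Homo Practicus  - action, "what do we do on Monday"

    def dominant(self) -> str:
        weights = {"HI": self.hi, "HR": self.hr, "HE": self.he, "HP": self.hp}
        return max(weights, key=weights.get)

# A hypothetical fast-moving startup team: intuition dominates.
team = IPERProfile(hi=0.45, hr=0.10, he=0.25, hp=0.20)
print(team.dominant())  # -> HI
```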


The question I asked myself was simple:


If the EU AI Act were a person, which IPER type would it be? And what happens when this person talks to a very different mind in Belgrade, Novi Sad or Podgorica?


The IPER portrait of the EU AI Act


When you read the AI Act with this lens, a clear profile appears.

  • The dominant type is HR – Homo Rationalis. The Act is full of technical and procedural vocabulary:

    • risk management system, conformity assessment, quality management system, technical documentation, post-market monitoring, harmonised standards, notified bodies… This is the language of control and predictability.

  • A strong secondary layer is HE – Homo Ethicus. The recitals and general provisions speak about:

    • fundamental rights, human dignity, non-discrimination, protection of children, safety, trust, transparency. This is the moral justification: we regulate AI to protect people.

  • HP – Homo Practicus appears in all the “providers shall…” and “deployers shall…”:

    • implement and maintain processes, test systems, keep logs, report incidents, correct non-conformities.

  • HI – Homo Intuitivus is squeezed into a small space:

    • innovation, competitiveness, strategic autonomy, trustworthy AI aligned with Union values. The vision is stated, then quickly packed into rational and procedural obligations.

In other words:

The AI Act is a rationalist legal machine, justified by ethical language, with practical checklists attached – and with intuition and vision compressed into a few paragraphs.

This is not a criticism. It is simply an IPER diagnosis.
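
For readers who want to see the mechanics, here is a deliberately simplified sketch of how such a profile could be approximated by counting marker terms in a passage of the Act. The term lists are abbreviated from the examples above; the actual analysis is qualitative, not a word count.

```python
import re
from collections import Counter

# Abbreviated marker vocabularies, taken from the examples above.
IPER_TERMS = {
    "HR": ["risk management system", "conformity assessment",
           "technical documentation", "post-market monitoring",
           "harmonised standards", "notified bodies"],
    "HE": ["fundamental rights", "human dignity", "non-discrimination",
           "protection of children"],
    "HP": ["implement", "keep logs", "report incidents",
           "correct non-conformities"],
    "HI": ["innovation", "competitiveness", "strategic autonomy"],
}

def iper_counts(text: str) -> Counter:
    """Count occurrences of each type's marker terms in a passage."""
    text = text.lower()
    counts = Counter()
    for iper_type, terms in IPER_TERMS.items():
        counts[iper_type] = sum(
            len(re.findall(re.escape(term), text)) for term in terms
        )
    return counts

passage = ("Providers shall implement a risk management system, keep "
           "technical documentation and report incidents, in order to "
           "protect fundamental rights.")
print(iper_counts(passage))
# -> Counter({'HR': 2, 'HP': 2, 'HE': 1, 'HI': 0})
```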


The real tension begins when such a document lands in a culture where many founders, engineers and public actors think very differently.


When “trust” means checklists for Brussels and relationships for Belgrade


A few examples where word-level collisions become dangerous:

  1. “Trustworthy AI”

    • In EU strategy papers, this is the big promise: safe, trustworthy, human-centric AI.

    • In the AI Act, “trust” is operationalised through HR & HP:

      • risk management, documentation, audits, transparency duties.

    For a rational legal mind, this is natural. For an intuitive or ethical mind in the Balkans, “trust” is first of all about relationships, intentions and lived behaviour. If you expect relational trust and receive procedural checklists, you feel that something essential is missing.

  2. “Transparency”

    • In ethics, transparency is about honesty and mutual understanding.

    • In the AI Act, transparency often means:

      • labelling AI systems, disclosing synthetic content, providing information in instructions and documentation. 

    Again, nothing wrong legally. But if a Serbian developer or policymaker hears “transparency” and imagines a deep human dialogue, while Brussels means “the correct label and a compliant instruction section”, we get a cognitive gap.

  3. “Human oversight”

    • In everyday language, this is a responsible, awake human who can feel when something is wrong and intervene.

    • In the Act, it becomes a design and governance requirement:

      • define roles, specify when and how a human can override the system, include it in documentation and risk management.

    The ethical and intuitive content of the phrase is translated into HR/HP structures.


Each of these is a small semantic shift. Taken together, they create a whole parallel universe where the words are the same, but the lived meaning is different.


Why this matters particularly for Serbia and the region


Serbia, like other Balkan countries, is now in a phase where everyone “plays with AI”:

  • startups aggregating models,

  • public institutions experimenting with chatbots,

  • universities and professors experimenting with AI in teaching and exams,

  • small companies using AI-based services created elsewhere.


For early-stage, innovation-driven startups – especially those working with intelligence and AI – the first years are shaped much more by brainstorming and intuition than by formal processes.


In Serbia right now, you can literally feel this: founders and small teams improvise, pivot, and test ideas faster than they can write internal policies. Their strongest asset is an intuitive ability to connect dots, sense opportunities and read people.


That also makes them fragile. When a fast, intuition-driven environment suddenly collides with a strongly rationalist regulatory layer, the very space where breakthroughs are born – open brainstorming, “what if?” conversations, bold experiments – can turn into the biggest vulnerability, simply because it doesn’t speak the same language as the law.


Some of the concepts that are central in the EU AI Act – like high-risk AI systems, prohibited practices, systemic-risk models, fundamental-rights impact – already exist in Serbian law, but often:

  • scattered across education law, labour law, public administration, media, data protection,

  • expressed in a very different legal and cultural language,

  • and not explicitly recognised as belonging to the AI domain.


At the same time, many teams here are led by:

  • Intuitive minds (HI) – vision, pattern, “I feel where this is going”.

  • Ethical minds (HE) – relationships, networks, fairness, “what does this do to people?”.

  • Practical minds (HP) – “what works in reality, with limited time and resources?”.


They are then asked to “comply” with a strongly rationalist legal structure written somewhere else, in a different tradition.


The risk is twofold:

  1. Compliance theatre

    • filling templates, copying risk management paragraphs, labelling things formally –

    • while the real ethical and practical risks in local context remain unaddressed.

  2. Hidden conflicts and missed opportunities

    • local actors may feel that “Brussels doesn’t understand us” and either ignore or quietly undermine the rules;

    • regulators may feel that “the Balkans are irresponsible”, because they don’t see the intuitive and ethical work that is actually being done.


In both cases, everyone loses: innovation, safety and trust.


What I am building: IPER Lex / AI-radar prava


My response to this is not a political manifesto, but an applied research project.

Very briefly – without revealing all the internals:

  • I treat the EU AI Act itself as a “cognitive actor” and run a qualitative IPER content analysis on it:

    • which IPER type dominates in which sections,

    • which key terms are “owned” by Rationalis,

    • where ethical, intuitive and practical words are pulled into rational frameworks.

  • On this basis, I am building a lexical dataset (IPER Lex) in English that:

    • identifies the core legal concepts of the AI Act,

    • tags them by IPER type(s),

    • and records where meanings may drift or split when read in other languages and cultures (a rough sketch of one such entry follows this list).

  • In parallel, I am starting to map Serbian legal language:

    • where the same or similar concepts live (often in different laws),

    • how they are framed linguistically,

    • which IPER types they activate in local minds.
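
To give a rough sense of the shape of the data, without revealing internals, an IPER Lex entry could look something like the sketch below. All field names and sample values are illustrative assumptions, not the final schema.

```python
from dataclasses import dataclass

@dataclass
class IPERLexEntry:
    """One AI Act concept as it might be recorded in IPER Lex.

    Field names and the sample values below are illustrative
    assumptions; the real schema is work in progress.
    """
    concept_en: str             # canonical term in the AI Act
    iper_tags: list[str]        # which IPER type(s) "own" the term
    eu_operationalisation: str  # what the Act actually requires
    sr_counterparts: list[str]  # where similar concepts live in Serbian law
    drift_note: str             # how the lived meaning shifts locally

entry = IPERLexEntry(
    concept_en="human oversight",
    iper_tags=["HE", "HR", "HP"],
    eu_operationalisation=(
        "defined roles, specified override mechanisms, "
        "documentation, risk management"
    ),
    sr_counterparts=[
        "supervision duties in labour and public-administration law"
    ],
    drift_note=(
        "locally heard as a responsible, awake human who can feel "
        "when something is wrong and intervene"
    ),
)
print(entry.concept_en, entry.iper_tags)
```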


This will become the foundation for AI-radar prava – an AI-assisted mediator that can:

  • compare the EU and Serbian “legal vocabularies” around AI,

  • highlight where the same word has a different lived meaning,

  • and explain obligations in ways that make sense to intuitive, ethical and practical minds – not only to rational lawyers.
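
And to show the kind of output I mean – as a purely hypothetical sketch, not the actual system – the same obligation could be reframed for each IPER type roughly like this, reusing the “transparency” example from earlier in this post:

```python
# A plain-dict stand-in for an IPER Lex record; the values are taken
# from the "transparency" example earlier in this post.
entry = {
    "concept_en": "transparency",
    "eu_meaning": ("label AI systems, disclose synthetic content, provide "
                   "information in instructions and documentation"),
    "lived_meaning": "honesty and mutual understanding between people",
}

def mediate(entry: dict, reader_type: str) -> str:
    """Reframe one obligation for one IPER type (hypothetical sketch)."""
    framings = {
        "HR": f"Legal duty: {entry['eu_meaning']}.",
        "HP": f"Concrete steps this week: {entry['eu_meaning']}.",
        "HE": (f"The value behind the rule is {entry['lived_meaning']}; "
               f"the Act verifies it through: {entry['eu_meaning']}."),
        "HI": (f"'{entry['concept_en']}' signals where the EU market is "
               f"heading; the paperwork is how that vision is enforced."),
    }
    return framings[reader_type]

print(mediate(entry, "HE"))
```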


I am deliberately not describing here the full architecture, parameters or all use cases. That is work in progress, and it is not meant to be a generic toy. It builds on years of observing intuitive competences and cognitive styles in real people, long before AI regulation became fashionable.


Why I’m sharing this now


Two reasons.

First, because time is short. Key provisions of the EU AI Act are already in force or will become applicable in the next 1–2 years, especially for prohibited and high-risk systems and for general-purpose models with systemic risk.

Second, because I don’t want to appear out of nowhere with a “finished product”. I want my Serbian and regional network to see that:

  • this line of work exists,

  • it is grounded both in law and in human typologies of thinking,

  • and it is aimed at helping teams become more aligned and less divided by language and regulation.


An invitation


If you are:

  • working on AI products that will touch high-risk domains (education, hiring, public services, health),

  • part of a legal, compliance or policy team dealing with AI,

  • or simply someone in the Balkans who feels the tension between EU language and local reality,

I would love to hear:

  • where you see the biggest misunderstandings between legal and human language,

  • which terms from the AI Act feel most “empty” or confusing when you try to apply them in real projects,

  • and what kind of mediator – human or AI – you wish you had.


I’ll be sharing more fragments of this work as it evolves – including a list of “problematic concepts” where EU and Serbian legal language silently pull in different directions.


For now, if this resonates, let’s keep the conversation open. The law may speak with a rational brain, but our region is rich in all four types of intelligence. We will need all of them to make AI governance here both compliant and truly human.

