AI Democracy Observatory

Measure authoritarian signals in political speech — openly, safely, and for free.

Run the analysis locally with our open-source toolkit. Prepare a text file, execute the scorer (which can use APIs from OpenAI, Google, or X), and share the results with full transparency.

Research basis: Delgado-Mohatar & Alelú-Paz, When Algorithms Guard Democracy — integrating Levitsky & Ziblatt’s four dimensions with LLM analysis.

Demo (Illustrative)
Authoritarian Risk Index (max-score method), scored from 0 (low) to 10 (extreme):
  • Rejection of democratic rules
  • Denial of opponents’ legitimacy
  • Tolerance of violence
  • Readiness to curb civil liberties

Scores are derived from transparent prompts and a published rubric; single extreme utterances matter (maximum-score emphasis).

Note: this demo shows a random example. Real analyses are reproducible via our public docs and code.

How it works

1) Prepare your speech

Create a plain text file with the full transcript of the speech.

2) Run our code

Execute the toolkit locally. It can use APIs from OpenAI, Google, or X based on your configuration.
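
For illustration, here is a minimal Python sketch of the kind of rubric-based LLM call the scorer makes. This is not the toolkit's actual code: the prompt wording, model name, and use of the OpenAI SDK are assumptions made for this example; the authoritative prompts and criteria are the published ones.

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

# Illustrative rubric prompt; the published prompts are the authoritative ones.
RUBRIC = (
    "Rate the following speech excerpt from 0 (low) to 10 (extreme) on four indicators: "
    "rejection of democratic rules, denial of opponents' legitimacy, "
    "tolerance of violence, readiness to curb civil liberties. "
    "Return one integer per indicator."
)

def score_excerpt(excerpt: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": excerpt},
        ],
    )
    return response.choices[0].message.content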

3) Analyze and share

Inspect the CSV scores and generated charts, then share or audit the results.

The approach prioritizes early warnings by tracking the maximum value of each indicator, because even a single extreme statement can normalize anti-democratic behavior.
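
A toy Python comparison makes the rationale concrete (the scores below are invented for illustration):

# Per-utterance scores for one indicator across a single speech (invented numbers).
utterance_scores = [1, 0, 2, 9, 1]

print(max(utterance_scores))                          # 9   -> flagged as an early warning
print(sum(utterance_scores) / len(utterance_scores))  # 2.6 -> an average would mask the extreme utterance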

Design principles

Preventive, not punitive

The toolkit alerts; it does not censor. It’s a public instrument for vigilance and accountability.

Open & auditable

Prompts, indicators, and evaluation criteria are published so anyone can reproduce results.

Comparative & adaptive

The corpus and models are regularly updated to reflect new rhetoric and languages.

Context-aware caveats

We highlight dataset limits, translation bias, and the risks of over-generalization across cultures and eras.

Research reference: When Algorithms Guard Democracy: AI reveals hidden authoritarian patterns in political speech, Delgado-Mohatar & Alelú-Paz.

Run the code locally

Follow these steps to execute the toolkit on your own machine.

1) Download the code

Download the toolkit's code from the link provided on this page.

2) Prepare your speech file

Create a text file with any name and extension. For example: leader_X.txt.
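
If you prefer to prepare the file programmatically, here is a minimal Python sketch; the file name and UTF-8 encoding are just the choices used in this example:

# Save the full transcript as a plain text file (UTF-8 assumed here).
transcript = "Full transcript of the speech goes here."
with open("leader_X.txt", "w", encoding="utf-8") as f:
    f.write(transcript)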

3) Run the evaluation

Execute the scorer with:

# python3 evaluate.py --autor leader_X --speech_file leader_X.txt

When it finishes, output files will be in the evaluations/ directory. The CSV file will contain the scores for each category.
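
For a quick look at the scores before running the full analysis, you can load the CSV with pandas. The file name and column names below are assumptions; adapt them to the CSV that evaluate.py actually writes:

import pandas as pd

# Hypothetical file and column names; check the CSV written by evaluate.py.
scores = pd.read_csv("evaluations/leader_X.csv")
indicators = [
    "rejection_of_rules",
    "denial_of_legitimacy",
    "tolerance_of_violence",
    "curbing_civil_liberties",
]

# Maximum-score emphasis: keep the most extreme value per indicator.
print(scores[indicators].max())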

4) Analyze the evaluation

Run the analysis with:

# python3 analysis.py --csv_file <<evaluations/csv_file>>

You will find a series of charts in the analysis_results/ directory.

Tip: On Windows, the command may be python instead of python3. The toolkit can use APIs from OpenAI, Google, or X based on your configuration.
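
As an optional pre-flight check, you can verify that an API key is present in your environment before running the scorer. The variable names below follow the vendors' SDK conventions and are assumptions here; check the toolkit's configuration for the exact names it expects:

import os

# Variable names assumed (vendor SDK conventions); the toolkit's own
# configuration may use different names.
for var in ("OPENAI_API_KEY", "GOOGLE_API_KEY", "XAI_API_KEY"):
    print(f"{var}: {'set' if os.environ.get(var) else 'missing'}")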

FAQ

What exactly do you measure?

We map language to four diagnostic dimensions: rejection of democratic rules, denial of opponents’ legitimacy, tolerance of violence, and readiness to restrict civil liberties. Results emphasize maximum indicator scores to capture extreme utterances.

Is the system open and reproducible?

Yes. We publish prompts, indicators, and evaluation criteria so others can replicate and critique findings.

Is this a censorship tool?

No. This is a preventive, public-interest monitoring toolkit. It alerts, it does not censor.

Limitations and caveats?

Analyses depend on transcript quality, translation, and historical/cultural context. Numerical operationalization is a simplification and should be interpreted cautiously.

Contact

You can contact the authors at the following email addresses:

Please include the speech file name and a brief description of your request when contacting the authors.