{
  "@context": "https://schema.org",
  "@type": "ScholarlyArticle",
  "@id": "https://miklian.org/papers/problem-of-the-present-ai-tools-business-crisis-political-instability#article",
  "headline": "A Problem of the Present: What Artificial Intelligence Tools Can (and Can’t) Deliver for Business Under Crisis and Political Instability",
  "name": "A Problem of the Present: What Artificial Intelligence Tools Can (and Can’t) Deliver for Business Under Crisis and Political Instability",
  "author": [
    {
      "@type": "Person",
      "name": "Jason Miklian",
      "sameAs": [
        "https://orcid.org/0000-0003-1227-0975",
        "https://scholar.google.com/citations?user=RHlevGEAAAAJ&hl=en",
        "https://www.researchgate.net/profile/Jason-Miklian",
        "https://www.wikidata.org/wiki/Q47107618",
        "https://en.wikipedia.org/wiki/Jason_Miklian",
        "https://www.globe.uio.no/english/people/aca/jasontm/",
        "https://www.prio.org/people/5833",
        "https://jasonmiklian.com"
      ],
      "@id": "https://miklian.org/#person"
    }
  ],
  "datePublished": "2026",
  "isPartOf": {
    "@type": "Periodical",
    "name": "Business Horizons"
  },
  "url": "https://miklian.org/papers/problem-of-the-present-ai-tools-business-crisis-political-instability",
  "abstract": "Artificial intelligence tools promise business leaders faster, cheaper, and more comprehensive intelligence for volatile environments, from bespoke large language models to ESG scoring benchmarks to predictive risk analytics based on generative or agentic platforms. While there is deep value in many of these tools, they obscure a structural weakness. AI can synthesize conventional wisdom about the recent past and extrapolate probabilistic futures, but it struggles with three distinct epistemic demands that instability places on decision-makers: real-time situational awareness, tacit operational knowledge, and relational trust with affected communities. Drawing on stakeholder theory and the case experiences of firms that have faced such challenges, I show how AI systematically distorts stakeholder identification, salience assessment, and engagement under instability. Algorithmic mediation generates a new category of invisibility, rendering the most consequential stakeholders in crisis environments structurally undetectable. I argue that local knowledge networks function as an insurance policy against catastrophic misreadings of volatile environments, and that the cost of maintaining them is a fraction of the losses firms absorb when algorithmic intelligence fails. The implications extend to any business environment characterized by rapid institutional change, regulatory unpredictability, or normative fragmentation in an age of polycrisis.",
  "keywords": [
    "artificial intelligence",
    "stakeholder theory",
    "crisis management",
    "polycrisis",
    "ESG",
    "strategic planning",
    "political risk",
    "agentic AI"
  ],
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "isAccessibleForFree": true,
  "inLanguage": "en"
}