In the previous Blurring the Boundaries article, I explored how AI augmentation changes the way I write and introduces unexpected experiences and questions. This led me to examine the long-standing history of human-machine relationships. While I did not discover anything entirely new, it became evident that our contemporary anxieties about AI are not unprecedented. Indeed, there are already established theories and approaches that can guide us methodologically and critically in understanding how the world of work, particularly creative work, evolves under the influence of AI.
In this article, I focus on another topic that I find deeply intriguing: business decision making. This process inherently involves ethics, strategy, and an emerging set of anxieties around AI. I must admit that I have not yet traced these anxieties through history as comprehensively as I did for creative work. Nevertheless, I intend to see where this line of inquiry leads.
Methodological Note
An unexpected realization emerged while preparing this article and reflecting on my experiences: I became aware of the theoretical framework I had been using almost unconsciously. I have begun referring to it as a “phenomenology of the augmented self.” It sits at the intersection of philosophical phenomenology, which investigates the structures of experience and consciousness from a first-person perspective, and the concept of an “augmented self,” which examines how technology enhances, extends, or modifies human capabilities and the resulting sense of identity.
With that in mind, let us delve into the main topic.
Report on the Augmented Self in Business Decision Making
Initial Reflections
Here I am, confronted with a business problem that demands attention: Is it truly safe or advisable to rely on AI when tackling the next critical issue in my consulting work? I am no stranger to business challenges and the anxieties they can generate. My usual approach is to jot down what seems important and begin with a free-form exploration, an attempt to develop an intuitive sense of direction that places the problem in a perspective I can handle intellectually, morally, and commercially.
I then enter a feedback loop with AI. At this stage, I assume several roles:
Context Master: I lead the conversation, often initiating numerous chat sessions to address different angles of the topic. I also reflect on contexts that predate AI or lie beyond its training scope, such as personal values or promises I have made.
Bullshit Detector: I remain vigilant about overly optimistic, pseudo-scientific, or commercial frameworks and clichés prevalent in both the business world and the training sets of large language models. My background in theology and philosophy helps me quickly spot reasoning that lacks historical or logical grounding.
Reality Checker: I keep in mind the actual people, places, and circumstances involved in a particular business scenario. I question whether the machine truly understands the nuances of the stakeholders and the potential consequences. I also consider the security of the augmented mental space—how transparent I am to third parties such as AI companies or malicious actors. Additionally, I watch for any hidden “marketing” or “propaganda” bias in the AI’s recommendations.
A high level of anxiety runs through this entire process. After all the brainstorming, feedback, and ambitious plans, I inevitably ask myself: Is this plan truly a reflection of my own will and leadership, or have I lost myself somewhere along the way?
Fragmentation of Strategic Direction
When I look back at certain initiatives spanning months or even a couple of years, I notice an apparent fragmentation in my sense of strategic direction. Relying on AI for insights on different aspects of a larger strategy sometimes leads to a lack of overarching coherence, something I previously maintained through a more linear, human-driven approach. While the AI provides excellent “snapshots,” it struggles to merge them into a cohesive narrative that aligns with my long-term vision. I often find myself spending extra time “stitching” these AI-generated insights into a unified whole, constantly correcting the AI’s output to ensure it fits my broader goals.
Transformation of Intuition
My relationship with intuition has also changed noticeably. As someone who has long relied on experience and gut feeling, I find the data-driven insights AI offers both validating and disconcerting. When AI confirms my intuition, it acts as an augmentation, boosting my confidence. When it contradicts my intuition, however, I experience cognitive dissonance, prompting me to question my own judgment more than I previously would. This shift suggests that my expertise is evolving from a pure reliance on internal knowledge toward the ability to interpret and critically evaluate AI-driven insights in conjunction with my existing experience. Occasionally, I worry about becoming so dependent on AI that I deskill my own intuitive capabilities.
Co-Agency and Authorship
Often, AI serves as an extension of my cognitive abilities, creating a sense of co-agency in decision making. The AI suggests, and I refine, accept, or reject its proposals. While this can be remarkably efficient, it also raises concerns about trusting algorithms and grappling with inherent biases. Responsibility and ownership become blurred: when an AI makes a substantial contribution to a strategic recommendation, who ultimately bears accountability for the outcome?
Potential for Deskilling
I also notice the risk of deskilling in tasks where I once relied solely on my own abilities. Just as I might rely on AI to catch grammatical errors in writing, I wonder whether depending on AI for initial analyses in decision making could erode my analytical strengths. Although letting AI handle the groundwork is tempting, I am concerned about the long-term impact on my cognitive “muscle memory.”
Underlying Anxieties and Mediation
With AI increasingly woven into my decision-making processes, new anxieties have emerged. The “black box” nature of certain algorithms makes me uneasy about their underlying assumptions and biases. Questions of authorship and accountability persist, particularly when AI-generated ideas significantly influence high-stakes decisions. Drawing on Don Ihde’s postphenomenology, I see how these tools mediate my perception of the business landscape, shaping how I identify trends and opportunities. This mediation is never neutral; it subtly steers my focus and frames my interpretations.
A nagging question remains: Are my thoughts and decisions truly my own, or have they been shaped by algorithmic processes? I appreciate the increased productivity and faster idea exploration, but I also feel a responsibility to maintain my unique perspective and critical judgment. The task is to balance the power of AI with the authenticity of my own thinking and leadership.
Even though AI is an indispensable tool in my decision-making process—enabling faster iterations, greater transparency, and more thorough problem analysis—I find myself significantly more anxious when using AI for business decisions than for tasks like writing. This heightened anxiety likely derives from the higher ethical and legal stakes, as well as from security concerns associated with potentially sensitive business data. While AI can speed up certain workflows, the complexity it introduces—especially in terms of verification, governance, and risk mitigation—can diminish or even cancel out many of the efficiency gains. The potential for AI-driven missteps to have long-range consequences adds to the pressure of ensuring that each insight is valid, secure, and contextually relevant.
Final Thoughts
My experience with AI in business leadership is dynamic and continues to evolve. While it offers efficiency gains, it also alters how I perceive decision making and strategy formation. The renegotiation of agency between my human intention and AI’s algorithmic suggestions directly influences both my work and my identity as a leader. In this increasingly augmented environment, I am committed to ensuring that my sense of self—and the authenticity of my decisions—remains central, even as AI tools become ever more integrated into the fabric of professional life.