Soul Machines

  • Status

    State
    Next Steps
    Case Date
    Watch Video
    Jurors Accepted
    Juror Verdicts Finalized

    The details, verdicts, and comments within this case record come from its participants. The Court's role is solely to facilitate the case process.

    Copyright © 2022-2026 Bright Plaza, Inc., All Rights Reserved. No one may publish a case, or any part of it, without a clear reference to the link with the case number as in https://www.truthcourt.net/case/<case-id-number>

  • Details

    Name
    Category
    URL
    Accusation
    Lie Truth

    Argument
  • Verdicts

    Answer: Yes
    Answer Confidence: 100 %
    Supporting Text:
    If the law around AI is done right then everybody in the world will be richer and happier for it.

    Answer: Don't Know
    Answer Confidence: 90 %
    Supporting Text:
I cannot help but ponder a new form of enslavement, not of labour but of authority. Will Humans no longer have the right answer to any problem, ethical or otherwise? Will they never be trusted again to see more clearly than a machine what the right answer is? The problem today with Humans is that the appropriate solutions are never implemented because of vested interests. I am not going to be persuaded that the small number of capital owners who will own these machines and lease them out (this is the information and expertise model we already have) will allow Humans complete control and therefore have the proposed responsibility. Consumers have responsibility over things that traditional law says they own. Ownership = responsibility. I doubt this will be the case for AI.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
    It is reasonable that users of AI tools should bear some responsibility for how they use them. This already applies in many real-world cases (e.g., misuse of tools or technology).

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:

    Answer: Yes
    Answer Confidence: 50 %
    Supporting Text:
    Yes, but...

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
    People should take responsibility for how they use AI tools, just like they do with other tools.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
Totally agree with the statement that people should be responsible for the AI they own or use. AI systems are not independent beings; they depend entirely on human input, instructions, and decisions. This means that whatever the AI produces is ultimately a reflection of the user’s actions. Robert Thibadeau argues in “Your Soul in That Machine” that machines carry a reflection of the human “soul”, meaning our thoughts, values, and intentions are embedded in the technology we create and use. AI does not think for itself; it mirrors human reasoning and interpretation of the world.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
    Yeah, it's the truth, holding people fully responsible for the AI and robots they own or license just makes sense, like how we already treat cars, tools, or any product that causes harm.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
    Yeah, it's the truth, holding people fully responsible for the AI and robots they own or license just makes sense, like how we already treat cars, tools, or any product that causes harm.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
    I think this little article tells pretty much the whole truth.

    Answer: Don't Know
    Answer Confidence: 90 %
    Supporting Text:
Not sure the capital relationship will allow Humans to be responsible for their AI. The AI will have controls built in to prevent misuse; hence they will be responsible for their actions.

    Answer: No
    Answer Confidence: 90 %
    Supporting Text:
    It leaves out key complexities—such as the responsibility of AI developers, limitations of user control, and risks of autonomous or unpredictable AI behavior.

    Answer: No
    Answer Confidence: 90 %
    Supporting Text:

    Answer: No
    Answer Confidence: 100 %
    Supporting Text:
Companies whose algorithms damage individuals now face a multi-pronged liability environment:

    1. Product Liability for Design Features: The K.G.M. verdict proves that features like infinite scroll and personalized feeds may now be legally viewed as potentially defective products.

    2. Erosion of Section 230 via First-Party Speech: The Third Circuit’s Anderson ruling suggests that algorithmic curation may be viewed as the platform’s own content, removing the statutory shield.

    3. The "Material Contribution" of GAI: Generative AI systems are increasingly being treated as the "speaker" of their own outputs, placing them outside the scope of Section 230.

    4. Judicial Scrutiny of AI Literacy: The Supreme Court is no longer "at sea" when it comes to technology. Individual Justices are developing sophisticated frameworks (e.g., Barrett's "person-in-the-loop") that will define the boundaries of corporate liability for decades to come.

    Answer: No
    Answer Confidence: 80 %
    Supporting Text:
    AI makers and companies also have responsibility for how their systems work and are designed.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
AI is not autonomous but a reflection of human thought, and this is clear in how it functions. AI works for us because we prompt it, guide it, and shape its responses. It does not create meaning on its own; it responds based on the input we give it. (This means our values, beliefs, and ways of thinking are embedded in the AI. The machine reflects what we put into it; it does not act independently of us.)

    Answer: Don't Know
    Answer Confidence: 90 %
    Supporting Text:

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
    It's nothing but the truth because the article nails it: we put our own values and "soul" into our machines, they work for us and enrich us through services, and the law should target the human owners first, not the core tech makers every time.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
There is nothing in the article that is misleading or without successful precedent in human life today. We are simply continuing government by law, which values free enterprise and liberty.

    Answer: No
    Answer Confidence: 90 %
    Supporting Text:
The plaintiff's suggestion in this verdict is incorrect; the law equates ownership with responsibility. Also, free enterprise implies a level playing field, which there is not. While there are many AI agents, there are only a few core systems upon which they are based. This is monopolistic, not free enterprise.

    Answer: No
    Answer Confidence: 90 %
    Supporting Text:
    The argument includes speculation (e.g., everyone owning humanoid robots and becoming richer), which is not guaranteed or proven.

    Answer: No
    Answer Confidence: 90 %
    Supporting Text:

    Answer: No
    Answer Confidence: 100 %
    Supporting Text:
    See my earlier verdict. This is a CRAZY complex legal topic. My summary above was the tip of the tip of the tip of the iceberg.

    Answer: No
    Answer Confidence: 70 %
    Supporting Text:
    Sometimes problems come from the AI system itself, not just the user.

    Answer: No
    Answer Confidence: 90 %
    Supporting Text:
No, it places all responsibility on the user alone, which is not entirely accurate. (While users must take responsibility because they guide and use AI, responsibility should also be shared with developers, companies, and regulators who shape how these systems function.)

    Answer: No
    Answer Confidence: 70 %
    Supporting Text:
    Sometimes problems come from the AI system itself, not just the user.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
And it's pretty much the whole truth for the near term: we stay in control by owning our humanoid helpers, humans don't get replaced or taken over, and this personal ownership model beats handing power to big centralized systems.

    Answer:
    There is no deceit.
    Answer Confidence: 100 %
    Supporting Text:

    Answer:
    The deceit is that the lie is misleading.
    Answer Confidence: 90 %
    Supporting Text:
The ecology of AI is different. It is not an artefact.

    Answer:
The main deceit is over-simplification and false certainty about the future: presenting a speculative future (mass humanoid robot ownership and wealth increase) as inevitable, while downplaying risks, inequalities, and shared responsibility between users and creators.
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    There is no deceit.
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    The deceit is that the truth is VASTLY more complicated and requires somewhat specialized knowledge to truly understand.
    Answer Confidence: 100 %
    Supporting Text:

    Answer:
    There is no deceit.
    Answer Confidence: 100 %
    Supporting Text:

    Answer:
    There is no deceit.
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    There is no deceit.
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    There is no deceit.
    Answer Confidence: 100 %
    Supporting Text:

    Answer: Yes
    Answer Confidence: 100 %
    Supporting Text:
    The truth is intended. Hopefully the lawmakers will wake up to the opportunity before they destroy the golden goose.

    Answer: Don't Know
    Answer Confidence: 90 %
    Supporting Text:
The plaintiff is wavering from the certainty of the proposition. Now we have to hope that the lawmakers will do what is necessary; earlier the plaintiff said we can continue with the law as it is. I don’t know if the truth was intended as such or as a controversial statement.

    Answer: Yes
    Answer Confidence: 50 %
    Supporting Text:
    The framing appears persuasive rather than neutral, aiming to convince readers of a specific vision of the future and legal approach.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:

    Answer: Don't Know
    Answer Confidence: 90 %
    Supporting Text:
    The plaintiff isn't trying to be dishonest. This is just a hugely complex issue at this time.

    Answer: Yes
    Answer Confidence: 75 %
    Supporting Text:
    The statement is trying to push the idea that users should carry most of the blame.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:
The users and owners (inventors/creators) should be held responsible.

    Answer: Yes
    Answer Confidence: 90 %
    Supporting Text:

    Answer: Yes
    Answer Confidence: 95 %
    Supporting Text:

    Answer:
    The motivation is to be informative
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    The motivation is to persuade you that the truth is factually true.
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    To promote a pro-user ownership model of AI, where individuals benefit economically and retain control, while shifting legal responsibility primarily onto users rather than developers.
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    The motivation is to be informative
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    The motivation is to be informative
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    The motivation is to be informative
    Answer Confidence: 90 %
    Supporting Text:
    It is trying to explain how responsibility for AI should work in the future.

    Answer:
    The motivation is to be informative
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    The motivation is to be informative
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    The motivation is to be informative
    Answer Confidence: 100 %
    Supporting Text:

    Answer: Acceptable
    Answer Confidence: 100 %
    Supporting Text:
    This true claim is more than acceptable. It is essential that every one on Earth understand it.

    Answer: Don't Know
    Answer Confidence: 60 %
    Supporting Text:
It is an ideal. Certainly everyone should be responsible for the tools they use, but will the future allow it? Is such a notion an archaism? After all, leasehold is a significant part of the economy, and matters of responsibility are conflicted.

    Answer: Don't Know
    Answer Confidence: 90 %
    Supporting Text:
    People generally agree with accountability for tool use, but may question the fairness of placing full responsibility on users, especially with complex AI systems.

    Answer: Acceptable
    Answer Confidence: 90 %
    Supporting Text:

    Answer: Don't Know
    Answer Confidence: 90 %
    Supporting Text:
    Everyone on Earth? Probably not. But everybody who's got any role in the AI product design, judiciary, or legal fields? Hell yes.

    Answer: Acceptable
    Answer Confidence: 90 %
    Supporting Text:
    It is normal to debate who should be responsible for AI use.

    Answer: Acceptable
    Answer Confidence: 90 %
    Supporting Text:
I find it normal for us to be debating who should be responsible for AI.

    Answer: Acceptable
    Answer Confidence: 90 %
    Supporting Text:

    Answer: Acceptable
    Answer Confidence: 100 %
    Supporting Text:

    Answer:
    No label needed
    Answer Confidence: 90 %
    Supporting Text:
    Get the word out.

    Answer:
    No label needed
    Answer Confidence: 90 %
    Supporting Text:
    Get the word out.

    Answer:
    No label needed
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    No label needed
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    This is a good idea, but the truth is SO MUCH BIGGER than this simple premise.
    Answer Confidence: 100 %
    Supporting Text:

    Answer:
    No label needed
    Answer Confidence: 90 %
    Supporting Text:
    Get the word out.

    Answer:
    No label needed
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    No label needed
    Answer Confidence: 90 %
    Supporting Text:

    Answer:
    No label needed
    Answer Confidence: 100 %
    Supporting Text: