Indggo - Connecting People

bill.indggo

@bill.indggo

Posts: 5 · Topics: 5 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • The Best Metaphor for AI? It Came to Me in a Dream.
    bill.indggo

    Is Generative AI a revolutionary new mind or just a sophisticated mimic? The debate rages, but it might be missing the point entirely.

    What if we're using the wrong metaphor altogether? The most useful way to understand AI might not come from computer science, but from the surreal and mysterious world of our own subconscious.

    [Image: dreams.png]

    I had this realization yesterday morning, waking from a dream so vivid I could still feel its edges. It was a full-sensory experience—in color, with dialogue, action, and shifting locations. As I lay there, I began to unpack it. Where did this bizarre narrative come from?

    The answer was simple: it came from everywhere. The story my sleeping mind had just "generated" was a collage stitched together from over sixty years of my own data—every book I've read, every face I've seen, every trip I've taken, and every fleeting observation from the day before. Some minor input from my waking life had served as a "prompt," triggering my mind to weave a new tapestry from the threads of my entire existence.

    The dream itself was gloriously imperfect. It was a world of impossible juxtapositions, half-remembered faces, and delightfully absurd physics. It wasn't logical or factual, but it felt real.

    And in that moment, it clicked. Generative AI works just like a dream.

    Think about the parallels. An AI model is trained on a vast dataset—the collected knowledge, art, and ramblings of humanity on the internet. This is its "lifetime of experience." We then give it a prompt, a small seed of an idea. The AI, in turn, draws upon its immense, chaotic library of information to generate a response.

    It creates scenarios, writes text, and produces images by pulling together patterns it has seen before. It doesn't "know" what a hand is, but it has seen millions of pictures of them, so it generates a plausible-looking hand... sometimes with an extra finger. It doesn't "understand" legal precedent, but it has read countless court documents, so it confidently constructs a citation for a case that never existed.

    Is it any wonder, then, that AI hallucinates? Our dreams do it every night. They create nonsensical yet compelling realities from the fragments of our memory.

    This re-frames the entire debate. The flaws of AI aren't bugs in its "intelligence" so much as they are features of its dream-like nature. The problem isn't that AI is a bad thinker; the problem is that it isn't thinking at all. It's dreaming.

    But just like our dreams, that doesn't make it useless. How often have you woken up with a fresh perspective on a difficult problem? A dream can spark a creative breakthrough or untangle a mental knot, precisely because it isn't bound by logic.

    Generative AI is a powerful tool for exactly the same reason. It is a collective dream engine. It offers us scenarios and ideas remixed from our shared human experience. These outputs are not, and cannot be, truly original thoughts. They are reflections.

    So, is Generative AI intelligent? Perhaps that’s the wrong question. It isn’t a mind to be trusted, but a dream to be interpreted. Its value isn't in its factual accuracy, but in its ability to provide us with novel combinations of ideas. It's a mirror to our collective data, showing us strange, beautiful, and sometimes distorted versions of ourselves.

    Our role is not to blindly accept its creations, but to be the dreamer who wakes up, finds the spark of inspiration in the absurdity, and brings it into the real world.


  • The End of Immunity: How Generative AI Makes Tech Giants Content Creators
    bill.indggo

    [Image: liability.png]

    For over two decades, a powerful legal doctrine has shielded internet companies from liability. The argument was simple: they were merely platforms—neutral conduits for ideas and opinions posted by other people. They weren't the content producer, just the utility that delivered it.

    With the advent of generative AI, that foundational argument is being obliterated. Tech giants are no longer just hosts; they are actively producing content. This shift represents a legal earthquake rumbling through Silicon Valley, fundamentally changing our ability to hold technology companies liable for the content on their platforms.

    The Argument: From Passive Host to Active Creator

    By deploying generative AI tools that synthesize, summarize, and create wholly new content, tech companies are fundamentally changing their role. In doing so, they are not just chipping away at their Section 230 liability shield; they are taking a sledgehammer to its very foundation.

    The Old World: The "Bulletin Board" Defense

    The classic analogy is that of a coffee shop owner with a physical bulletin board. The owner provides the board and pins (the "platform"), but they are not legally responsible for a defamatory flyer that someone else posts. For years, this principle protected Google from lawsuits over search results (it just indexed others' content), Facebook and X from user posts, and Yelp from user reviews.

    The New World: Shattering the Analogy

    Generative AI shatters the "bulletin board" defense. The platform is no longer just providing the cork and pins; it is operating a machine that instantly writes a brand new flyer based on a customer's suggestion. This transforms them into content producers in several key ways:

    1. From Indexing to Synthesizing (e.g., Google's AI Overviews):

    • Old Google acted as a librarian, giving you a list of links to books written by others.

    • New Google acts as an author. When you search, its AI writes a new summary paragraph that never existed before. If that summary defames someone or gives dangerously incorrect information (e.g., "this trail is safe in winter" when it’s prone to avalanches), Google is arguably the publisher.

    2. From Hosting to Co-Creating (e.g., Meta's AI Image Generation):

    • Old Instagram hosted a user's uploaded photo. The user was the sole creator.

    • New Instagram provides the engine of creation itself. When a user prompts the AI to generate a photorealistic—and potentially defamatory—image, Meta's tool is what creates the pixels. Meta is no longer a passive host but an active partner in the creation.

    The "Creator" Trap

    Section 230 protects platforms from liability for what others post, but it defines an "information content provider" (ICP) as anyone "responsible, in whole or in part, for the creation or development of information." The legal argument against tech companies becomes startlingly simple: by designing, training, and fine-tuning an AI model, the company is, by definition, "responsible, in part," for the creation of its output. The AI is not "another user"; it is a core feature of the service.

    Implications: A New Era of Legal Responsibility

    This transformation opens up new battlegrounds for liability that will be fought in court for the next decade.

    1. Direct Publisher Liability: When an AI "hallucinates" and states that a CEO was convicted of a crime they never committed, that CEO can now potentially sue the platform directly for defamation as the publisher.

    2. Product Liability Lawsuits: This is a powerful new avenue. Lawyers can argue the generative AI model is a "product." If that product is defective (e.g., it has a propensity to generate false or harmful content) and causes harm, the manufacturer (the tech company) can be sued under established product liability law.

    3. Liability for Training Data: Accountability may extend beyond the AI's output to its input. If a model is trained on copyrighted or private data, platforms could be seen as laundering and profiting from that information.

    4. The Rise of a "Duty of Care": Courts and legislators may pivot from blanket immunity toward a "reasonable care" standard. The question will no longer be if a company can be held liable, but rather, did the company take reasonable steps to prevent foreseeable harm from its AI? This would shift the entire paradigm from immunity to negligence.

    The Platform's Defense (And Why It Falls Short)

    Tech companies will inevitably argue that their AI is merely a sophisticated tool and that the user's prompt makes the user the true content creator. "We provided the chisel," they will claim, "but the user sculpted the statue."

    While clever, this argument is weak. They didn't just provide a chisel; they built an autonomous, master-sculptor robot with its own embedded biases, knowledge, and creative tendencies—one that can produce works far beyond the user's specific instructions or abilities.

    Conclusion: The End of the Free Pass

    The move to generative AI represents the most significant challenge to the legal status of online platforms in a generation. The wall of immunity was built for a world of static webpages and user comments. That wall now faces a tsunami of AI-generated content, and it is unlikely to survive intact. Tech companies have stepped firmly into the business of content production, and with that new role will inevitably come a new era of legal responsibility.


  • Three Reasons AI-Powered Search Results Might Be a Bad Idea
    bill.indggo

    [Image: computer-8490390_1280.jpg]

    Search engines like Google are increasingly using AI-generated summaries and suggestions at the top of their search results. These are likely based on distilling vast amounts of high-quality content from across the internet and other sources.

    But what are the risks if this approach becomes the standard for all search tools?

    1. New and Original Content May Be Buried
    AI-generated results often rely on existing content. This makes it harder for new or original material to surface and gain attention, potentially discouraging fresh contributions.

    2. The Connection Between Searchers and Content Creators Is Lost
    When AI presents a summary, users are less likely to visit the original source. This weakens the link between the searcher and the content creator—who might be an expert worth engaging with. It also reduces opportunities for creators to earn income from their work.

    3. Critical Evaluation of Sources Becomes Difficult
    Summaries limit the ability to assess who wrote the original content and whether they are credible. That judgment is left entirely to the AI, which may carry biases—especially if profit motives influence which sources are highlighted.

    Bonus Concern: Loss of Context
    Sometimes, the meaning of a piece depends heavily on its context or point of view. An AI-generated summary may misinterpret or overlook that nuance, leading to misunderstanding.

    What We Might Be Giving Up

    AI-generated summaries may seem like a shortcut to information, but they come at a cost. They risk burying new voices, detaching us from the people behind the content, and filtering meaning through a machine that may not understand nuance. As this becomes the norm, we should question not just what we’re gaining—but what we’re quietly losing in the process.


  • LLMs and Supply-Side Limits
    bill.indggo

    Scarcity of resources is the rule.

    LLMs are built by ingesting huge amounts of content. In this first iteration, the models consumed virtually all publicly available content, regardless of their right to have, copy, or distribute it. Protection for the content's owners was limited or non-existent.

    LLMs have current knowledge (if you can call it that). But like any lifelong learner, AI will need new content to stay current.

    That can be seen in action when a company that wants to use AI must first "train" it with proprietary information before it becomes useful.
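
    As a rough sketch of what that "training" often looks like in practice, the Python snippet below shows retrieval-augmented prompting: the company's proprietary documents are looked up at question time and pasted into the prompt of a general-purpose model. The document text, the toy word-overlap ranking, and every name in it are hypothetical illustrations, not any vendor's actual API.

        # A toy sketch of retrieval-augmented prompting (all names and data are made up).
        # Instead of retraining the model, the company's own documents are looked up at
        # question time and prepended to the prompt sent to a general-purpose LLM.
        PROPRIETARY_DOCS = [
            "Warranty policy: all widgets are covered for 24 months from purchase.",
            "Return policy: unopened widgets may be returned within 30 days.",
            "Support hours: weekdays 09:00-17:00 Eastern.",
        ]

        def retrieve(question, docs, top_k=2):
            # Rank documents by naive word overlap with the question (toy scoring).
            q_words = set(question.lower().split())
            ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
            return ranked[:top_k]

        def build_prompt(question):
            # Assemble the prompt a general-purpose model would actually receive.
            context = "\n".join(retrieve(question, PROPRIETARY_DOCS))
            return "Answer using only this company context:\n" + context + "\n\nQuestion: " + question

        print(build_prompt("How long is the widget warranty?"))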

    General-purpose LLMs must train on new trends and new creative content as well.

    What happens when content creators become better at protecting their work from capture by LLMs? The models will necessarily stop learning and be stuck in the past, much like that high school buddy who never left the '90s.

    Past copyrighted work may be lost for all practical purposes, but going forward, new work can - and should - be protected. Once effective strategies and technology are available to protect new works from unauthorized capture, the business model for AI companies will have to change. LLMs will compete for new content and pay for it in order to stay competitive.
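
    One early example of such protection already exists: a site's robots.txt file can tell known AI crawlers to stay out. A minimal sketch follows; the user-agent tokens (GPTBot, CCBot, Google-Extended) are real names published by their operators, but honoring them is voluntary, so this alone will not stop a determined scraper.

        # Hypothetical robots.txt for a creator's site, opting out of known AI crawlers.
        User-agent: GPTBot
        Disallow: /

        User-agent: CCBot
        Disallow: /

        User-agent: Google-Extended
        Disallow: /

        User-agent: *
        Allow: /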

    The AI/LLM vendors who make this transition first will be the ones who survive.

    It may not be a great comparison, but consider Napster and iTunes. Who survived?


  • Inspiration and AI
    bill.indggo

    What Role Does Inspiration Play in AI?

    In human creativity, what role does inspiration play, and is it transferable to AI and LLMs?

    Humans are inspired by the world around them; sights, sounds, and other senses all play a part. One could argue that AI could do the same thing, but with a much larger lens.

    Inspired creativity requires conscious thought, perception, and the ability to see or imagine the future. Can AI do that? Can AI ever recognize when an idea, or the juxtaposition of facts and surroundings, amounts to something worth exploring that will resonate with humans?

    Is AI destined to create new structures and ideas that only other AIs can appreciate? Is a whole other society being created?

    Of course, the ethical question remains the elephant in the room. LLMs are built on the collective works of humans, in many cases copyrighted works that they have no right to use.

    While human inspiration comes from human experience, stored as memories and thoughts in a human brain, can we also say that AI is inspired by its larger lens of experiences and memories? Except that the source of those experiences and memories is work that has been captured, stored, and analyzed in a commercial system: captured and used without permission, with its derivative works sold to users online.

    Let's agree that a human can look at a painting, read a book, or listen to music, and store those memories in their brain for personal use, then create something new inspired by those memories without it counting as a derivative work in the legal sense.

    Is a public library the same thing as an LLM created by a non-profit, where you can check out a book without violating the author's rights, as long as the book was purchased or donated for that use?

    Does this mean that the only ethical way to treat LLMs and AIs is as public utilities, regulated on behalf of society as a whole?
