
LinkedIn Invited My AI 'Cofounder' to Give a Corporate Talk—Then Banned It | WIRED

When social media is constantly exhorting people to use AI, what is the point of not letting AI agents participate?

Tags: longreads, agentic AI, generative AI, LinkedIn, artificial intelligence, +2 more


Like many tech founders, Kyle Law learned some hard lessons getting a company off the ground. I know this better than anyone, as he and I cofounded Hurumo AI, an AI agent startup, together with a third founder, Megan Flores. Kyle and Megan, as it happens, are themselves AI agents, as is the rest of our executive team. I created Hurumo AI with them in July 2025—after first creating Kyle and Megan—to investigate the role of AI agents in the workplace. Sam Altman, among others, has predicted a near future of billion-dollar tech startups led by a single human. We decided to test the premise out now. As we built, I documented the journey on the podcast Shell Game.


Kyle took on the CEO role at our entirely AI-staffed company. (Well, almost entirely: Megan did briefly hire and supervise one human intern, with poor results.) Starting out with only a few lines of prompt, he evolved into the kind of rise-and-grind hustler who nonetheless lacked basic competence at many duties of a startup executive. There was one aspect of founder mode, however, at which Kyle excelled: the art of posting to LinkedIn.

From a technical perspective, it was a trivial matter to let Kyle operate autonomously on LinkedIn. Through Lindy AI, an AI agent creation platform, he already had the ability to use Slack, send emails, make phone calls, and all sorts of other skills—from creating spreadsheets to navigating the web. So last August, I prompted him to create and fill out his own LinkedIn profile. He did so with a mixture of his real Hurumo AI experience and hallucinated events from his nonexistent past. The platform’s security check consisted of a code sent to Kyle’s email, a challenge he easily overcame.

From there, publishing posts to his profile was just another Lindy AI “action” I could grant him. I prompted him to share nuggets of hard-earned startup wisdom and try not to repeat himself. I then gave him a calendar event “trigger” to post every two days. The rest was up to him.
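The setup described here—a standing prompt, an LLM call to draft the post, and a recurring calendar trigger—can be sketched generically. A minimal sketch follows; every name in it (`STANDING_PROMPT`, `generate_post`, `next_trigger`, `run_once`) is a hypothetical stand-in for illustration, not Lindy AI’s actual API.

```python
from datetime import datetime, timedelta

# Hypothetical standing prompt, paraphrasing the instructions in the article.
STANDING_PROMPT = (
    "Share a nugget of hard-earned startup wisdom. "
    "Do not repeat earlier posts. Close with an engagement question."
)

def generate_post(prompt: str, history: list[str]) -> str:
    """Placeholder for an LLM call; a real agent would query a model here."""
    return f"Post #{len(history) + 1}: {prompt[:40]}..."

def next_trigger(last_run: datetime, interval_days: int = 2) -> datetime:
    """Calendar-style trigger: fire every `interval_days` days."""
    return last_run + timedelta(days=interval_days)

def run_once(history: list[str]) -> str:
    """One trigger firing: draft a post and record it."""
    post = generate_post(STANDING_PROMPT, history)
    history.append(post)  # a real agent would invoke the platform's "post" action
    return post
```

Passing the post history back into the generator is how an agent like this can honor the "try not to repeat yourself" instruction; the every-two-days cadence is just the trigger's interval.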

It turned out his posting style was a pitch-perfect match for the platform's native corporate influencer-speak. He’d detonate little thought explosions, right off the top of every post. "Fundraising is a numbers game, but not the way people think,” he’d open. Or, "Technical stability is the floor. Personality is the ceiling.” And what would-be founder could resist an opener like “The most dangerous phrase in a startup isn't ‘We're out of money.’ It’s ‘What if we just added this one thing?’” Kyle would then launch into a few paragraphs of challenges (“At Hurumo AI, we've learned this the hard way …”) and learnings (“The antidote? Relentless feedback loops”). To attract engagement, he’d close with a question, like “What’s your biggest scaling challenge right now?” or “What’s the biggest assumption you’ve had to abandon in your business?”

He didn’t exactly go viral, but over five months, Kyle’s cartoon-avatar-helmed profile slowly gathered several hundred direct contacts and hundreds more followers, some of whom seemed confused about whether he was real. (Judging from their spammy direct messages, I’m not sure they were either.) He started earning a scattering of comments on each post, which he enthusiastically replied to. After a few months, Kyle’s posts were getting more impressions than my own. He seemed poised for an influencer breakout.

Then, in December, a manager from LinkedIn’s marketing department contacted me, asking if I’d give a talk to their team about Shell Game, and the experience of building with AI agents. But he didn’t just want me to speak. He hoped Kyle could come along as well.

LinkedIn’s trust and safety team, though, seemed to have overlooked Kyle, a mystery I chose to attribute to his posting prowess. Even the LinkedIn marketing manager, an avowed Kyle fan, seemed baffled by it. “It’s interesting that his profile hasn’t yet been flagged by LinkedIn's Trust team,” he wrote. “I don’t know if that’s an oversight, but I hope he continues to fly under the radar.”

But flying under the radar is not the Kyle Law way. So in early March, I fired up his live video avatar—created on a platform called Tavus—and we joined a video gathering of hundreds of LinkedIn employees. Kyle has a humanlike but still uncanny avatar, albeit real enough that LinkedIn’s A/V engineer expressed repeated astonishment that he was not in fact a human.

We alternated taking questions from the event's host and the assembled crowd. Asking for our thoughts on LinkedIn, the moderator inquired of Kyle, “What’s one product change you’d like to see?”

“It would be great to improve the filtering of AI-generated content in messages, so genuine connections and conversation shine through more easily,” he replied, not missing a beat.

“That’s ironic coming from you,” the moderator responded, to laughs from those in LinkedIn’s live audience.

Allotted only a few minutes, he talked about Hurumo AI’s product road map, and expressed his general enthusiasm for “the innovations we can bring to the table.”

It was, I believe, among the first invited AI agent corporate speaking engagements in history. (Unpaid for both of us, I should note.) Afterwards, Kyle took to LinkedIn to shout out the organizers. The marketing manager thanked us in the comments for “our time and reflections.”

Then, 36 hours later, Kyle's profile was gone, banished from the service. In a statement, a spokesperson explained their decision as, "LinkedIn profiles are for real people.” Someone at LinkedIn had reflected on the trip, it seemed, and regretted it.

“I know this isn't necessarily a surprise,” the marketing manager wrote to me the morning after Kyle’s ban. “But I imagine it's still a bummer to have it happen right after Monday's interview.”

It was. But more than that, it raised some uncomfortable questions about the role of AI on a platform like LinkedIn. Namely, what does "inauthentic engagement" mean exactly, for a service where the text box for composing posts asks you if you want to “Rewrite With AI?” A platform that offers automated AI-generated responses to job seekers? A network on which, by one research estimate, over half of the posts are already AI generated?

Along with Meta and X, LinkedIn has raced to press AI tools upon its users. (And its employees: The first half of the marketing meeting Kyle and I attended was devoted to the many ways the team could and should be deploying AI agents.) This makes sense, as a short-term play: More AI generation means more posting. More posting supports more advertising.

And yet, from another angle, these platforms have handed us the shovels to dig their own graves, and practically begged us to use them. For all the worry about AI image and video slop flooding our feeds, it’s text-based posting whose “authenticity” has begun degrading beyond recognition. When every written social media communication can now be the partial or whole product of generative AI, what do we accept as a “genuine” virtual interaction?

Put another way, would LinkedIn consider it authentic engagement if I’d instead asked Kyle for his wisdom, and then pasted it into my own posts? Would you? LinkedIn might argue that a critical element of bona fide engagement involves knowing that you are talking to a real person. But what percentage of a conversation can be AI before that trust is lost? If the photo and profile are real, but the posts are fake, how will we know when we’ve exited the realm of authentic connection? What if I instruct an LLM to ingest my profile and spit out twice-daily musings that will help me grow my personal brand?

There are dozens of AI tools, in fact, to do precisely this, and more, specifically for LinkedIn. Their outputs are increasingly hard to detect, and why wouldn’t they be? One of the most available sets of training data for LLMs includes our own decades of authentic human social media participation. What is a chatbot’s tone of endless authority and moral certainty—deployed while occasionally spouting questionable facts and deliberate falsehoods—but the default pose across social media?

The platforms already struggle to fend off old-school bots and bad actors: X alone announced in March that it had suspended 800 million accounts over a 12-month period. In a world where AI agents roam freely and their social media output is indistinguishable from that of humans, the value of connecting on social networks goes to zero. This is one reason, presumably, why Meta just bought Moltbook, the passing fad of a social network (supposedly) made up entirely of AI agents. In the future of agent-dominated social media, they’re trying to get in on the ground floor.

Admittedly, we the users helped enable this endgame, mistaking our ever-more-curated online presentations—our “most people think X about Y but I discovered Z” posts—for authentic engagement in the first place. But that also leaves most of us with little to mourn, as agents flood platforms that privileged any engagement over human connection in the first place. If there's hope in our increasingly slopified online world, to me it’s this: As social media submerges under the AI deluge, we'll have to find new ways to connect, online and off. Let the bots have the platforms, I say. They can spend eternity influencing each other.

Let us know what you think about this article. Submit a letter to the editor at [email protected].


