It started like many modern reputational crises do, not with a formal complaint, a regulator’s notice, or a rival’s exposé, but with a short, shareable video clip.
At the India AI Impact Summit, a government-backed event showcasing India’s artificial intelligence ambitions, a staff member from Galgotias University stood before the cameras and introduced a quadruped robot dog. The robot, nicknamed “Orion” at the stall, was presented as something “developed” by the university’s Centre of Excellence.
Within hours, people on social media recognised the machine as the Unitree Go2, a robot dog that is sold commercially by Unitree Robotics.
By the next day, Reuters cited government sources saying the university had been asked to leave its summit stall. Later footage showed the pavilion in darkness after the power was reportedly cut, drawing even more attention to the situation.
In about 48 hours, what was meant to be a high-profile demonstration of AI and robotics turned into a credibility crisis. This incident touched on sensitive national issues like “homegrown innovation,” manufacturing pride, and India’s role in the global AI race.
This wasn’t only about a robot dog. It was about how quickly “innovation theater” can turn into public distrust when a brand overreaches. Once online crowds start forensic verification, the story shifts from intent to receipts.
The setup: a summit built for bragging rights
The India AI Impact Summit was pitched as a flagship moment: a Global South–anchored AI gathering, hosted at Bharat Mandapam, drawing political leaders and global tech executives.
International coverage framed the summit as part of a larger push to brand India as an emerging AI power in software and advanced manufacturing. High-profile names were expected to speak. The event also drew attention for overcrowding and logistical hiccups.
In that atmosphere—big promises, massive footfall, national positioning—exhibitor booths weren’t just marketing. They were symbolic.
And the incentive structure was obvious: every institution wanted a showstopper.
The prelude: the “Rs 350+ crore AI push” as a credibility anchor
Days before the controversy erupted, the university issued a press release touting “more than Rs 350 crore” invested in AI, describing it as a landmark private-university investment meant to operate at a global scale.
Reporting that reconstructed the timeline said the announcement referenced major infrastructure and partnerships, including high-end compute platforms and multiple industry and academic tie-ups. Leadership was quoted in confident, nation-aspirational language.
The number “Rs 350+ crore” became a narrative spine. It created expectations that the booth at a government-backed AI expo would showcase genuine, attributable work: prototypes, research, IP, or at least clearly labelled integrations. Credibility is cumulative. When you claim scale, you implicitly claim governance: vetting processes, communication discipline, and technical clarity.
The crisis that followed did not simply contradict a moment at the booth; it collided with the premise of the entire PR arc.
The spark: “This… has been developed by the Centre of Excellence.”
The defining moment was a short exchange on camera.
In the viral clip, the staff member, Neha Singh, introduced the robot dog as “Orion” and described it as developed by the university’s Centre of Excellence.
The way something is worded matters a lot. At innovation showcases, there are several honest ways to talk about using a purchased platform:
- “We acquired this platform and built applications on top of it.”
- “This is a commercial quadruped; our work is in autonomy software and use-case modules.”
- “We integrated surveillance and navigation routines for research and teaching.”
However, saying “developed by” sounds like you are claiming to have created something from scratch, not just customized it.
Online users quickly noticed that the hardware looked almost identical to the Unitree Go2, a well-known robotics product.
After that, the story spread faster than any university communications team could manage:
- The clip spread.
- A product match circulated.
- Prices and listings were posted.
- Public ridicule followed.
Reuters reported the model was sold for about $2,800 (pricing varies by configuration and market). Other Indian coverage cited local price ranges around Rs 2–3 lakh.
This detail was important not because of the price, but because it made the claim easy to check. When a product can be bought online, it’s easy for people to prove or disprove claims with screenshots.
How “PR by association” can cause extra problems
The controversy grew because the summit was government-backed.
Reuters reported that Ashwini Vaishnaw shared the video clip but later deleted it after facing backlash. Whether intended as a quick promotion or to highlight India’s innovation, this action made the incident a larger political and reputational issue.
Once a government figure is seen amplifying a claim, critics treat the incident as more than a campus-level embarrassment. It becomes a question of institutional credibility. Opposition politicians soon called it an example of “cheap PR” and said it hurt India’s image abroad. This is how modern PR crises grow: it’s not just about what happened, but also about who seemed to support it.
How the university responded: denial, “propaganda” framing, and a blame-forward apology
After the backlash, the university responded in several steps, and the order of these responses may have exacerbated the situation.
Phase 1: “We never claimed it”: the defensive clarification
Multiple outlets reported that the university insisted it had not built the robot and had not claimed to do so, presenting it instead as a teaching and demonstration tool that exposes students to global technology.
This defence can make sense, since many labs buy platforms and then do research with them. But the main problem was that the words used on camera (“developed by”) did not sound like “we bought this and customized it.”
Phase 2: the “propaganda campaign” framing
Reports also said the university called the online criticism a “propaganda campaign,” shifting the tone from explanation to argument.
In crisis communication, this is a common risk: if you accuse others of acting in bad faith, people will work even harder to find proof. Online communities, especially technical ones, respond to arguments by sharing even more evidence.
Phase 3: the apology, paired with distancing and internal blame
The university later issued an apology that was unusually specific in its framing: it said “confusion” was created because a representative was “ill-informed,” “not authorised to speak to the press,” and, “in her enthusiasm of being on camera,” gave “factually incorrect information.”
The apology also said the institution had vacated the premises “understanding the organisers’ sentiment.”
This language did two things simultaneously:
- It acknowledged that something went wrong.
- It shifted responsibility to a single representative rather than acknowledging a broader communication problem.
This approach might seem protective at first, but it can cause backlash. The public often sees it as saying, “We’re sorry you’re upset, and it’s her fault.”
In the long run, apologies that blame individuals can also hurt trust inside the organization. Staff may feel that public mistakes put them personally at risk, making it even harder to handle future crises.
The enforcement: asked to vacate, and the power-cut spectacle
Reuters, citing government sources, reported that the university was asked to vacate its stall after the incident.
Indian Express reported that power supply to the pavilion was cut, barricades were placed, and representatives were seen leaving. It also quoted a senior government official arguing that only “genuine and actual work” should be presented and that misleading claims were unacceptable.
NDTV reported that power was “reportedly cut off” at the stall minutes after it was asked to vacate, and referenced PTI video showing staff standing in the dark.
From a public image perspective, the video of the stall in darkness quickly went viral. It transformed the argument over the claim into a public display of punishment, sending a clear message even without any explanation: they had been caught.
Why it hit so hard: the trust equation in AI and robotics
The main harm was not just embarrassment; it was a loss of trust at a time when AI institutions need it most.
AI is already a hype-heavy domain. Everyone knows the incentives:
- Announce big numbers.
- Use futuristic props.
- Claim “revolutionary” initiatives.
- Capture attention in a crowded news cycle.
But people interested in AI are especially focused on checking facts. Many are engineers, students, and researchers who know:
- which platforms are mass-produced,
- what open-source stacks look like,
- how quickly “built” can quietly mean “integrated.”
When a public claim seems to go too far, shifting from “we used this” to “we made this” can prompt swift, harsh criticism.
The Galgotias episode became a textbook case because:
- The product was identifiable.
- The claim was easy to clip.
- The setting was a national showcase.
- The PR buildup (Rs 350+ crore) raised the stakes.
- The response mixed defensiveness, victimhood framing, and blame-shifting.
That mix of factors is how a controversy turns into a fiasco.
What the university likely meant—and why “meaning” didn’t matter
Several later defences suggested the institution’s work was in software, programming, and applications—customizing or building on top of the platform rather than fabricating the hardware.
If that was the intent, it’s not inherently scandalous. Buying a platform and building autonomy modules, perception pipelines, surveillance routines, or interaction behaviors is legitimate educational and research work.
But in public communication, precision matters more than intent.
In a camera clip, “developed by” is interpreted as origin. The nuance—“we developed modules on it”—was not present in the soundbite people saw first. Once that first impression hardens, later nuance reads like backtracking.
This is why PR crises often start with just one sentence: the words you use shape how people judge your claim, both legally and morally.
When branding outpaces substance
Calling this an “AI plus PR fiasco” points to a bigger issue than the robot dog itself: the messaging leaned more on branding than on verifiable substance. The narrative relied heavily on scale (“Rs 350+ crore”), ambition (global leadership language), and big-stage visibility at the AI summit.
That kind of messaging is not rare. Across India and globally, universities increasingly market “AI centres,” “centres of excellence,” and “innovation hubs” to attract students, partnerships, and prestige.
But there is a basic risk: the more you hype, the less ambiguity your audience will tolerate.
If your campaign frames you as a leader, then your demo must be impeccably labelled. If your campaign implies that you build frontier tech, then your booth cannot be anchored by a purchasable product presented as campus-developed.
In other words, when you promote something heavily, there is little room for mistakes.
How it could have been better tackled: a practical crisis playbook
It’s easy to dunk. It’s easy to criticise an institution after a mistake. A better question is: what would a smart, credibility-saving response have looked like before and after the video went viral? What follows is a playbook based on what went wrong in public view.
1) Pre-demo governance: label everything like a museum exhibit
If the robot was bought, the booth should have said so clearly and confidently. A plainly worded label would have changed the story:
- Hardware platform: Unitree Go2 (procured)
- Galgotias work: autonomy behaviours, surveillance workflow, campus navigation routines, research modules, student projects
- Goal: teaching + applied R&D, not hardware fabrication
When you are honest from the start, people have less reason to try to catch you in a mistake.
2) Train spokespeople and enforce a single language standard
In high-footfall expos, anyone can end up on camera. The fix is not “don’t talk to the press,” it’s:
- designate official spokespeople,
- give them two approved sentences they can repeat,
- teach everyone else to politely direct questions to the official spokesperson.
If the truth were “we bought the platform and built applications,” the spokesperson script should have been:
“This is a commercially available quadruped robot platform. Our team’s work is in the AI applications we’ve built on top of it for education and research.”
That’s all that needs to be said. Keep repeating it.
3) Crisis response: acknowledge the specific error immediately
Once the clip went viral, the optimal first statement would have been fast, specific, and non-defensive:
- confirm the hardware origin,
- clarify what the university actually built,
- correct the “developed by” phrasing,
- apologise for the inaccurate wording.
Don’t say “propaganda,” “misinterpretation,” or “we never claimed,” because the video itself is a claim.
A high-integrity version:
“The robot hardware is a Unitree Go2 platform that we procured. Our work is in the AI software and use-case modules demonstrated at the stall. The on-camera wording was inaccurate, and we regret the confusion.”
That would not have fixed the mistake, but it would have stopped the situation from getting worse.
4) No scapegoating: own the system failure, not the individual
Publicly calling a staff member “ill-informed” and “not authorised,” and blaming her enthusiasm on camera, might seem like damage control, but it signals weak leadership and can appear unfair. The better approach is to say:
- “Our internal review process failed.”
- “We’re changing our approval and demo protocols.”
- “We take responsibility.”
That is how organizations keep their credibility.
5) Show your work: publish a technical breakdown within 24 hours
The internet rewards receipts.
If Galgotias truly built software modules, the best weapon was documentation:
- a GitHub repo (even partial),
- a demo architecture diagram,
- a short explainer video showing custom behaviours,
- a clear separation of “platform” vs “our contribution.”
A simple one-page post explaining “What we built” could have changed the conversation from “fake invention” to just “poor wording.”
6) Coordinate with organisers: prevent the “punishment spectacle”
When organizers step in by asking an exhibitor to leave or cutting power, the story starts to look like a punishment.
If the university had moved quickly, it might have negotiated a less dramatic corrective action:
- re-labelling the exhibit,
- issuing a public correction at the booth,
- continuing participation with a compliance note.
Instead, the situation escalated to the point where it appeared someone was being kicked out.
7) Long-term repair: separate marketing from research credibility
If the institution genuinely has a major AI investment plan, the recovery path is straightforward but slow:
- publish peer-reviewed outputs,
- release student project portfolios with clear attribution,
- highlight collaborations where partner roles are explicit,
- let results, not props, carry the story.
AI credibility is built slowly and through steady work.
The lesson: in the age of screenshots, “close enough” is catastrophic
The most important point in Reuters’ coverage was not the robot dog but how the incident affected the summit’s narrative: it highlighted the tension between India’s AI ambitions and the risks of excessive hype.
In AI, where claims can be inflated and demos can be staged, the public has learned a new reflex: verify first, celebrate later.
And now, fact-checking is done by the crowd, happens instantly, and is often more skilled than anything an official PR team can do.
The central takeaway of the Galgotias fiasco:
- If your demo is real, label it precisely.
- If your work is “built on top of,” never say “built.”
- If you make a mistake, fix it quickly, take responsibility, and provide proof.
Don’t fight the internet with victimhood framing; fight it with documentation.
Because hype travels fast.
But screenshots travel faster.