German Podcast Episode #216: Rahul's Key Achievements as Senior IT Counsel since 2010
6 minutes
Description
4 months ago
Neha: Rahul, let's start with the basics: Why
are IP risk assessments even necessary for AI software?
Rahul: Because even tech giants stumble badly here. Take Google v. Oracle, the Supreme Court case decided in 2021: ten years of litigation, all because Google used Java API code in Android without a prior IP risk check.
Neha: What was Google's specific
miscalculation?
Rahul: They assumed API structures were freely usable. A risk assessment would have shown that licensing or custom code development was necessary. An expensive lesson!
Neha: You also mention Clearview AI – what went
wrong there?
Rahul: The startup scraped billions of social media images for its facial recognition AI, with zero legal vetting. The result: €20M fines in both Italy and France, plus ACLU litigation under Illinois' biometric privacy law, BIPA.
Neha: What would your method have identified
here?
Rahul: Two core risks. First, lack of user consent – a clear GDPR violation. Second, breach of the platforms' terms of service and of the photo copyrights. Facebook even sent them cease-and-desist letters over this.
Neha: Let's jump to your case study. The
healthcare startup example?
Rahul: Yes! Their AI diagnostic software carried two ticking time bombs: the training used a competitor's proprietary datasets – a copyright minefield – and they processed patient data without adequate safeguards.
Neha: How exactly did you address the risks?
Rahul: A three-phase approach. First, license the competitor's data or switch to public datasets. Second, fully anonymize the patient data. Third, clear the AI algorithm for "freedom-to-operate".
Neha: "Freedom-to-operate" – could you explain
that?
Rahul: Sure! It checks whether existing patents cover the AI technology. If they do, you design around them or take a license to avoid willful infringement. That's what we secured here.
Neha: And the outcome?
Rahul: The competitor sued for data misuse – but we produced the license! That defeated the lawsuit and also headed off potential trade secret claims.
Neha: What about data privacy?
Rahul: During GDPR audits, the tool passed thanks to privacy by design: synthetic data minimized the use of real patient information. Without it, Clearview-style fines loomed.
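A minimal sketch of the synthetic-data idea behind privacy by design, assuming toy column names and distributions: fit simple marginal statistics on the real columns, then sample fresh records so no output row maps back to a real patient. A production system would use an audited synthesizer with formal privacy guarantees.

```python
# Toy synthetic-data generator (illustrative assumptions throughout).
import random
import statistics

real_ages = [34, 41, 29, 57, 62, 45, 38]  # stand-ins for real patient data
real_diagnoses = ["asthma", "copd", "asthma", "copd", "asthma", "copd", "asthma"]

mu, sigma = statistics.mean(real_ages), statistics.stdev(real_ages)
diagnosis_pool = sorted(set(real_diagnoses))

def synthetic_record() -> dict:
    """Sample from fitted marginals rather than copying real rows."""
    return {"age": max(0, round(random.gauss(mu, sigma))),
            "diagnosis": random.choice(diagnosis_pool)}

print([synthetic_record() for _ in range(3)])
```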
Neha: You mention open-source components – what
was the risk?
Rahul: I found non-compliant components. Overlooking this could have led to scenarios like Jacobsen v. Katzer, where an open-source license violation supported injunctions and damages in 2008, or the BusyBox GPL lawsuits that caught companies off guard.
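For the open-source point, here is a minimal sketch of a license inventory for a Python environment, using only the standard library. The list of copyleft markers is an illustrative assumption and no substitute for a proper compliance review.

```python
# Flag installed packages whose declared license looks copyleft
# (GPL/AGPL/LGPL), the license families behind the BusyBox suits.
# The marker list is illustrative, and package metadata can be
# missing or wrong, so treat hits as prompts for manual review.
from importlib.metadata import distributions

COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL")

for dist in distributions():
    name = dist.metadata.get("Name", "?")
    license_field = (dist.metadata.get("License") or "UNKNOWN").strip()
    if any(marker in license_field.upper() for marker in COPYLEFT_MARKERS):
        print(f"REVIEW: {name} is licensed under {license_field}")
```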
Neha: Which legal areas does such an assessment
cover?
Rahul: Four pillars: 1) copyright (third-party code and data), 2) patent law (freedom-to-operate), 3) open-source licenses, 4) data privacy – especially DPIAs for high-risk processing under the GDPR, and HIPAA risk assessments for health data.
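One way to make the four pillars operational is to encode them as a machine-checkable audit checklist. This tiny sketch paraphrases the conversation; the item wording is an illustrative assumption, not a complete legal workflow.

```python
# Four-pillar audit checklist as plain data (illustrative wording).
AUDIT_PILLARS = {
    "copyright": ["third-party code reviewed", "training-data provenance documented"],
    "patents": ["freedom-to-operate search completed"],
    "open_source": ["license inventory generated", "copyleft obligations cleared"],
    "privacy": ["DPIA filed for high-risk GDPR processing", "HIPAA risk assessment done"],
}

def open_items(completed: set) -> list:
    """List every checklist item not yet marked complete."""
    return [item for items in AUDIT_PILLARS.values()
            for item in items if item not in completed]

print(open_items({"license inventory generated"}))
```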
Neha: How do regulators respond?
Rahul: The EDPB explicitly recommends "AI impact assessments". The EU AI Act will mandate them for high-risk AI – see its risk-management requirements in Article 9 for the systems listed in Annex III.
Neha: And in the US?
Rahul: The FTC warned in its 2020 guidance "Using Artificial Intelligence and Algorithms" that deploying unvetted AI with bias or security problems can be an unfair practice. Even the FDA requires hazard analyses for medical AI.
Neha: You call it "Legal AI Audit" – why is this
becoming standard?
Rahul: Because it proactively meets regulatory demands. Microsoft demonstrated this in 2019 when it rejected a facial recognition project over concerns about minority rights. That is how you prevent future scandals.
Neha: So it's like a fire drill for legal
teams?
Rahul: Exactly! We uncover the hidden landmines – copyrights, patents, licenses, privacy gaps – and defuse them before launch. Otherwise you end up in Google's ten-year war or facing GDPR mega-fines.
***
Read the German text here:
https://docs.google.com/document/d/1oEspwKpwMcjlN5BkId5-KTNIs7pywqDbp8g1lYnU2fg/edit?tab=t.0