Brazen abuses and bad actors | This Week in Games
LinkedIn wants your consent to train generative AI models on your data, and it's not waiting for you to give it.
I know the name of the column implies discussion of games, but I'm going to talk about LinkedIn for a bit instead, and not just because half the games industry is on that site looking for a job right now.
I'm going to talk about LinkedIn because this week it gave us yet another clear example of how Big Tech companies are openly hostile to the wishes (and rights) of their users.
On Wednesday, I started seeing social media posts from people who discovered a new option buried in LinkedIn's data privacy settings.
Labelled "Data for Generative AI Improvement," the setting asks if LinkedIn can use your personal data and content posted on the site to train generative AI models. From the moment this setting was added, it was set by default to "On."
As noted by TechCrunch, you can switch that setting to "Off" whenever you want, but an FAQ in the platform's support section says doing so won't undo any training that's already taken place – which you can assume covers all of the data and content you ever posted to the site before this new toggle appeared in the settings.
It's obvious dirtbaggery, but let's spell out the key bits anyway:
● LinkedIn knows users have legitimate reasons to not want their information and output used to train Generative AI.
● LinkedIn therefore knows it should get people's consent to do that.
● LinkedIn knows a bunch of people would not give that consent if it were an opt-in process, or even if it were an opt-out process where users were given a set amount of time to opt out before the company assumed consent.
● In order to maximize the amount of data it could use to train AI and minimize the impact of users' lack of consent, LinkedIn chose a process that would instantly allow it to feed every scrap of information collected or created over its more than two decades as a platform into its generative AI models, regardless of consent.
This was not apathy or carelessness about what users might want. This was a deliberate act to deny users a choice the company knew full well they should have.
When I say LinkedIn can train its AI on "every scrap of information" on the platform, that's admittedly overstating it a little. LinkedIn didn't pull this nonsense on users in the European Union, Iceland, Liechtenstein, Norway, and Switzerland, because their consumer protection laws are strong enough to deter these kinds of shenanigans.
Compare that to the US, where the Federal Trade Commission in 2017 busted TV maker Vizio for a similarly sketchy scheme where it used an automatic opt-in so it could collect and sell users' personal information. The punishment? $2.2 million in fines. The "Platform+" segment Vizio built with that ill-gotten personal information made more than $36 million the next year alone, so I imagine the company viewed the FTC's slap on the wrist as just the cost of doing (shady) business.
But even in the US, where LinkedIn felt confident enough to do whatever the hell it wanted with your information without even stealth-updating its terms of service to give itself permission first, the FTC is still too strong for the company's liking.
Reid Hoffman is a co-founder of LinkedIn and a board member at Microsoft. He's also a billionaire mega-donor supporting the Democratic Party. In the days immediately after Joe Biden suspended his 2024 presidential bid in July – when there were rumblings about an open convention and some possibility the party would not simply rally around current VP Kamala Harris as its nominee – Hoffman decided that was the time to go public with plans for a major fundraising push for Harris... while also publicly stating that she should replace Federal Trade Commission chair Lina Khan. As Hoffman saw it, the head of the agency responsible for enforcing antitrust laws had been "waging war" on American businesses by actually trying to enforce antitrust laws.
Let's look at some of the dispatches from the frontlines of that war. Just a month before Hoffman called for Khan's dismissal, the FTC launched an investigation into Microsoft's $650 million deal to effectively absorb Inflection AI, a start-up Hoffman co-founded. Khan's brutal campaign of governmental overreach has been a veritable Sherman's March to the C-Suite that has seen the FTC go to court to stop the Microsoft acquisition of Activision Blizzard (unsuccessfully), intervene to prevent the Meta acquisition of VR developer Within Unlimited (unsuccessfully), and ban non-compete clauses in contracts for non-executive roles (again, unsuccessfully).
This isn't to say the FTC has been uniformly ineffective. Its $520 million penalty against Epic Games for using dark patterns and violating children's privacy in Fortnite was a fantastic example of the sort of work the regulator should absolutely be pursuing. This week's report on inadequate safeguards for kids on the largest social media and streaming services also suggests more such actions are on the way.
While the FTC's batting average under Khan may leave something to be desired, she's at least attempting to fulfill the organization's mission of protecting the public from the forces of unchecked capitalism. But successful or not, even attempting that is too much for billionaires like Hoffman to stomach.
In a follow-up interview with CNN, Hoffman insisted he doesn't condition his support upon any kind of political action. It's a lot like how a mobster won't tell you he'll burn down your store unless you pay protection money; he merely offers his services while lamenting what a shame it would be if anything were to happen to such a fine establishment.
Hoffman added that having the FTC making acquisitions more difficult is "creating less incentive for investment, for potential competition, because you're blocking [merger and acquisition] exits."
To which I say, "Good."
As I see it, one of the big problems in tech and venture capitalism is that the system doesn't incentivize people to create sustainable businesses. Instead, it incentivizes them to build hype for something that may or may not be viable, and then sell their bundle of problems off to someone else before people discover whether or not there was any "there" there to begin with.
The goal is to cash out at the apex of the hype cycle, often by selling to one of a handful of mega-corps like Microsoft, Meta, or Google. And even if your start-up turns out to be a complete sham, those giants won't be too put out by burning billions on it because a) they can still be obscenely profitable regardless (like we've seen with Meta's ongoing VR/AR losses) and b) it still has value as an investment in pre-empting possible disruptors. (It's much better to have a start-up with genuinely disruptive potential fail on your dime than succeed on anyone else's.)
Venture capital is always going to funnel a chunk of its money toward speculative hype bubbles and predatory business models, because those sorts of schemes tend to have more attractive returns on investment than their more responsible and ethical counterparts. Making it harder for a start-up to be acquired by a handful of massive players won't fix that, but it might at least moderate some of the harmful fallout those types of businesses can have on their customers, their employees, and the industries they're trying to disrupt, particularly should they collapse because their offering was mostly smoke and mirrors anyway.
If entrepreneurs knew those kinds of exits would be harder to come by and subject to greater scrutiny, perhaps they would focus more on building sustainable stand-alone businesses instead of money pits that only financial giants could afford.
If the FTC and Khan's more aggressive stance towards mergers and acquisitions means fewer exits and more entrepreneurs giving thought to what happens with their business in the long run, that seems better for just about everyone but people like Hoffman.
Working the refs over unpaid whistleblowers
This actually intersects nicely with a follow-up on the column about government whistleblower programs from a couple weeks back.
After that column ran, I had a chat with Siri Turner, executive director of the nonprofit National Whistleblower Center, to ask why agencies that deal with money have whistleblower programs that pay out for tips leading to large fines or settlements, while agencies that protect the public as a whole (like the FTC) do not.
"The thinking behind a lot of the programs is that people are motivated by money, especially in the financial sector or crimes that have to do with money," Turner said. "So it's a bit of a more natural connection to offer economic rewards for people who come forward with information about financial misconduct."
She said there's "absolutely" a push to bring such whistleblower incentive programs to more agencies, and that the NWC has been supporting the idea for the FTC. After all, these kinds of programs have proven to pay for themselves (and then some), incentivize more compliance with the law, and motivate more people to come forward with valuable information. So why hasn't it actually happened?
"That's unfortunately a very political question that I think has a lot to do with the agencies themselves not having much traction," Turner said. "For example, the [Consumer Financial Protection Bureau] has come under scrutiny, and the FTC right now is not exactly rewarded for being successful, let's just put it that way. So any program that will really supercharge those agencies' ability to take enforcement actions will always be undermined and attacked."
She added, "It's become really common to hear scathing remarks about Lina Khan, who is the chair of the FTC right now. She's been doing a great job, and it just feels like the better she does, the more certain parties are unsatisfied with her...
"She's the chair of the FTC and she's getting pushback from the Chamber of Commerce and corporate interest groups, and those are the same people who attempt to undermine successful programs at the SEC and the IRS."
The Chamber of Commerce has called the FTC "an agency gone rogue" under Khan, and joined with more than 260 trade associations to protest the FTC's rule outlawing non-compete agreements.
"These same corporate entities that are interested in unbridled profits don't want effective regulators, and will both attack effective, long-standing, well-regarded programs like the SEC and IRS programs, but also agencies and their leaders who are doing well without those programs," Turner said.
As for the future of whistleblower incentive programs, Turner suggested people sign up for the NWC mailing list to stay informed, and beyond that to simply be mindful of how people attacking agencies and their leaders may have a vested interest in undermining their ability to protect and incentivize whistleblowers.
"Of course there are times when leaders are rightfully scrutinized – and whistleblowers are often a part of that – but when effective leaders are being attacked simply because corporate interests don’t want them to be able to do their jobs well, we can’t see whistleblowers thrive," she said.
"And whistleblowers are essential to effective regulation."
If you liked this piece, please consider signing up for the Unlosing Writer newsletter. It's free, but if you enjoy the work, I would greatly appreciate a monthly subscription or a one-time tip.