The Standard Contract is Not Enough – DTNS 4525

The EU fines Meta a record €1.2B for transferring European user data to the US in violation of GDPR. What now for Meta? The Cyberspace Administration of China has warned critical infrastructure operators against buying components from memory chip maker Micron, citing “relatively serious” cybersecurity risks in its products. And Instagram plans to release a text-based Twitter competitor, codenamed P92 or Barcelona, that could arrive as early as June as a separate app.

Starring Tom Merritt, Rich Stroffolino, Justin Robert Young, Roger Chang, Amos, Joe

MP3 Download


Using a Screen Reader? Click here

Download the VIDEO VERSION here.

Follow us on Twitter, Instagram, YouTube and Twitch

Please SUBSCRIBE HERE.

Subscribe through Apple Podcasts.

A special thanks to all our supporters–without you, none of this would be possible.

If you are willing to support the show or give as little as 10 cents a day on Patreon, thank you!

Become a Patron!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods Jack_Shid and KAPT_Kipper on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here!


EU Fines Meta a Record $1.3B Over European User Data – DTH

WhatsApp introduces message editing, Amazon One announces palm-scanning age verification, and Venmo rolls out Teen account support.

MP3

Please SUBSCRIBE HERE.

You can get an ad-free feed of Daily Tech Headlines for $3 a month here.

A special thanks to all our supporters–without you, none of this would be possible.

Big thanks to Dan Lueders for the theme music.

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods, KAPT_Kipper, and PJReese on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here.

TikTok Use Banned in Montana – NTX 314

YouTube changes its ads, TikTok use is banned in Montana, and pay your Totalplay bill using Bitcoin.

MP3


You can SUBSCRIBE HERE.

News:
-Pay Totalplay with Bitcoin
-YouTube changes its ads
-Gmail will close inactive accounts
-Upload 2-hour videos to Twitter
-TikTok banned in Montana

Analysis: Legislation that doesn't understand technology

You can support Noticias de Tecnología Express directly at this link.
Thanks to everyone who supports us. Without you, none of this would be possible.
Many thanks to Dan Lueders for the music.

Contact us by writing to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page, click here!

TikTok Table Talk – DTNS 4524

The Wall Street Journal reports that Apple restricted some of its employees from using generative AI models at work over concerns about leaks. Barclays financial strategists highlighted streaming services as a new source of pressure on the Japanese yen. Grub Street's Ezra Marcus has a story called “Cheesier, Saucier, and Drowning in Caviar: How TikTok Took Over the Menu.” Is TikTok changing the way restaurants approach presentation?

Starring Tom Merritt, Robb Dunewood, Len Peralta, Roger Chang, Joe, Amos

MP3 Download


Using a Screen Reader? Click here

Download the VIDEO VERSION here.

Follow us on Twitter, Instagram, YouTube and Twitch

Please SUBSCRIBE HERE.

Subscribe through Apple Podcasts.

A special thanks to all our supporters–without you, none of this would be possible.

If you are willing to support the show or give as little as 10 cents a day on Patreon, thank you!

Become a Patron!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods Jack_Shid and KAPT_Kipper on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here!


About Section 230 (May 2023 Update)


We update the history of Section 230 in light of the recent Supreme Court decisions: what it is, what it isn't, and how those decisions did or didn't affect the future of the “safe harbor” law in the US.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to feedback@dailytechnewsshow.com

Episode transcript:

The US Supreme Court has decided two cases that challenged the protections of Section 230 of the US Communications Decency Act, and in both cases the court declined to touch those protections. In oral arguments the justices indicated they felt Congress should be the one to do that.
Twitter v. Taamneh argued that Twitter provided unlawful material support to terrorists by failing to remove users from its platform. Gonzalez v. Google claimed that a platform, in this case YouTube, should be liable for content it recommended to users.
A lot of people misunderstand what Section 230 does and doesn’t do. So in this updated episode, I’ll cover the basics of what it is and what it isn’t and what the court did and did not say in these landmark cases.

We covered the history and meaning of Section 230 in depth in the episode About Safe Harbor in July 2020. So if you want the deep dive please listen to that.
This episode will focus on how to properly explain and think about Section 230 no matter what argument you may or may not be trying to make. You may think Section 230 promotes censorship. You may think it protects big tech companies from responsibility. You may think it should be repealed. Those are all reasonable positions to take. But I often hear people argue these sorts of positions from a starting point that is wrong. I just want to give you the correct starting point from which you can make your argument.
So let’s start with the folks who say we should just get rid of it. There is a misconception that if we get rid of Section 230 companies would have to take responsibility for the content on their platform or that they would have to stop censoring. Neither one of those things is assured.
Without Section 230, ANY platform (and it's worth pointing out this applies to a forum you might run on your own website, as well as to Facebook) would be seen in the eyes of the law as either a publisher of information or a distributor. A publisher is responsible for what it publishes. A distributor is not responsible for the contents of what it distributes.
The easiest way to think about this is a brick-and-mortar bookstore. The publishers of the books and magazines it sells are responsible for what's in the books and magazines. The bookstore is just the distributor. In fact, a 1959 Supreme Court case, Smith v. California, ruled that a bookstore owner cannot reasonably be expected to know the content of every book it sells. The bookstore should only be liable if it knew or should have known that selling something was specifically illegal. Otherwise the publisher is liable for what's in the book or magazine.
Now let’s think about that for a minute. The bookstore can decide what magazines to carry. But it’s not deciding what’s in the magazine. It isn’t allowed to sell magazines that it knows are illegal but it’s not expected to read every word of every magazine to police its content.
On the other hand, letters to the editor published in the magazines are in fact the responsibility of the publisher. Just because a reader wrote the letter doesn’t mean the publisher had to print it. It CHOSE to print it. It exercised editorial control, and therefore is liable for what the reader wrote.
The publisher of the content is not protected from liability. But the bookstore gets protection because it’s not exercising editorial control of what’s in the books. It’s a distributor.
Fast forward to the 1990s. CompuServe and Prodigy are vibrant new parts of the internet where people are talking to each other like never before.
It’s April 1990. Sinead O’Connor’s new song Nothing Compares 2 U (written by Prince) tops the Billboard charts.
Robert Blanchard has developed Skuttlebut, a database for TV news and radio gossip. It's a new competitor for a similar service called Rumorville, published over on CompuServe's Journalism Forum. Skuttlebut and Rumorville are in stiff competition for the burgeoning online audience that wants TV and radio news industry gossip. This is FIVE YEARS before the Drudge Report, mind you.
In the heat of the competition, Rumorville posts that Skuttlebut has been getting info from a back door at Rumorville, that Skuttlebut's developer, Robert Blanchard, got “bounced” by WABC, and describes Skuttlebut as a “scam.”
So Skuttlebut's owner, Cubby Inc., sued Rumorville's parent company, but also sued CompuServe as the publisher. But here's the thing: CompuServe did not review Rumorville's content. Once it was uploaded, it was available. CompuServe also didn't get any money from Rumorville. The only money it made was off subscribers to CompuServe itself, whether they read Rumorville or not.
In Cubby, Inc. v. CompuServe, the judge ruled that CompuServe was not a publisher. It was a distributor. It could not reasonably know what was in the thousands of publications it carried on its service. Therefore, like a bookstore, CompuServe was not liable for what was published in Rumorville.
Reminder. This is without Section 230. The platform was not exercising control over the content so it was not liable for what was in it.
On to October 1994. Boyz II Men is dominating the charts with a long run at number one with “I’ll Make Love to You.”
Prodigy's Money Talk message board is still awash in talk about the bond market crisis. And an anonymous user has posted that securities investment firm Stratton Oakmont committed crimes and fraud related to a stock IPO. Stratton Oakmont takes exception to what it considers defamation and files a lawsuit against Prodigy, alleging the company is the publisher of the information.
So you’d think, given the Compuserve case that Prodigy is in good shape. It didn’t publish the comments the commenter did.
Except. It’s been a few years, and a few raging internet flame wars later, and Prodigy, like many other platforms, has developed some Content Guidelines for users to follow. It also has Board Leaders who are charged with enforcing those guidelines. And Prodigy even uses some automated software to screen for offensive language. This is all good community moderation practice right? Clear set of guidelines. Consequences if you violate them. And even some automated ways to keep some of the bad stuff from ever even showing up.
The court looked at that and said, well, looks to us like you’re exercising editorial control. You’re deciding who gets to post what. That feels a lot more like the letters to the editor than it does the bookstore. The court wrote “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.”
In Stratton Oakmont v. Prodigy, the court ruled in favor of Stratton Oakmont.
After that case the law stands: courts will give you the protection of a distributor as long as you don't moderate. If you moderate the content, you're on the hook for it.
So in other words before Section 230, you could either leave everything up or you’d have to be responsible for everything, meaning you’d have to pre-screen all posts. Your choice is either zero moderation or prior restraint.
Republican Chris Cox and Democrat Ron Wyden both thought this was not an ideal situation. So they wrote Section 230 of the Communications Decency Act which read “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Those are the 26 words usually cited as Section 230, but they're just paragraph (1) of subsection (c). There's a second paragraph of subsection (c) which is also important. It's titled “Civil liability” and reads:
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
In other words, even if it’s protected free speech, the platform can take down content it finds objectionable and not lose its protections from liability for other content.
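To keep the before-and-after rules straight, here's a tiny decision sketch in Python. The function and parameter names are my own illustration of the cases described above (the bookstore rule, Cubby v. CompuServe, Stratton Oakmont v. Prodigy, and Section 230), not anything from the statute, and it is of course not legal advice:

```python
def liable_for_user_content(moderates: bool, knew_illegal: bool,
                            section_230: bool) -> bool:
    """Is a platform liable for content a user posted? Illustrative only."""
    if section_230:
        # Post-1996: good-faith moderation no longer converts the
        # platform into a publisher (criminal, IP, and other
        # carve-outs aside).
        return False
    if moderates:
        # Stratton Oakmont v. Prodigy: editorial control makes you a
        # publisher, so you answer for users' posts.
        return True
    # Cubby v. CompuServe / the 1959 bookstore rule: a pure distributor
    # is only liable for material it knew (or should have known) was
    # specifically illegal.
    return knew_illegal

# A Prodigy-style moderating platform before Section 230: liable.
print(liable_for_user_content(moderates=True, knew_illegal=False,
                              section_230=False))
# The same platform after Section 230: protected.
print(liable_for_user_content(moderates=True, knew_illegal=False,
                              section_230=True))
```

The sketch makes the pre-230 dilemma visible: the only way to avoid liability without Section 230 is to not moderate at all and hope you never "should have known."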
All of this is a long way of saying: if the platform didn't create the content, it's not responsible for it... with a few exceptions.
This is another part of the discussion of Section 230 that gets left out. Section 230 specifically says that this law will have no effect on criminal law, intellectual property law, communications privacy law or sex trafficking law. So the DMCA for example still has to be followed. You have to respond to copyright takedown notices.
So back to the two Supreme Court cases Twitter v. Taamneh and Gonzalez v. Google.
We have to remember that platforms are still responsible for content THEY generate.
If Facebook's own staff post on Facebook defaming you, Section 230 does not protect the company. Section 230 only means Facebook is not on the hook for what I post.
So what about recommendations? What about the stuff in my feed that Facebook chose to show me without my input? Facebook didn't create the content, but it chose to show it to me specifically, not to everyone. That would certainly count as editorial control before Section 230, but Section 230 was put in place specifically to allow a measure of editorial control (removal of posts) without having to take responsibility for all posts.
Also remember that “terrorist” content qualifies as criminal content, which Section 230 does not protect. So how long can criminal content be up before a platform “should” have known about it and taken it down? Specific to Twitter v. Taamneh: was Twitter “aiding and abetting” terrorists when it failed to remove such content?
Bearing on both the question of algorithms and criminal content is one more case that tested Section 230 shortly after it became law.
It’s April 25, 1995. Montell Jordan’s “This is How We Do It” tops the charts.
And someone has posted a message on an AOL Bulletin Board called “Naughty Oklahoma T-Shirts” describing the sale of shirts featuring offensive and tasteless slogans related to the Oklahoma City bombing, which had happened six days before. The posting listed the phone number of Kenneth Zeran of Seattle, Washington, who had no knowledge of the posting. Zeran then received a high volume of calls, mostly angry about the post. Some were death threats. Zeran called AOL, which said it would remove the post. However, a new post appeared the next day, and more followed over the next four days. One of the posts was picked up by a radio announcer at KRXO in Oklahoma City, who encouraged listeners to call the number. Zeran required police protection and sued KRXO and then, separately, AOL.
In its decision, the United States Court of Appeals for the Fourth Circuit wrote “It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect.”
It also wrote that Section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.”
Zeran argued that even if AOL wasn't a publisher, it was a distributor, and under the 1959 case a distributor would still be responsible for speech it knew was defamatory. And Zeran argued AOL knew, because he called them about it after the first post. The judge, however, said that AOL is a publisher, not a distributor, plain and simple. But Section 230 shields it from the liability normally attached to a publisher. So you can't just redefine it as a distributor.
This ended up being stricter protection than the 1959 distributor rule. Instead of having to take content down once they knew about it, internet services were given a broader shield.
And that became the principal justification for CDA 230.
If the Supreme Court had followed that precedent, it might also have considered recommendations to be publishing behavior and therefore protected.
However, that's not what happened. Instead, the court seems to think that algorithmic recommendations are new enough that Section 230 doesn't properly apply to them.
During oral arguments for Gonzalez v. Google on February 22, 2023, multiple Justices indicated they thought Congress should rule on whether algorithmic recommendations should be considered to cause liability or not.
Justice Elena Kagan said, “This was a pre-algorithm statute, and everyone is trying their best to figure out how this statute applies. Every time anyone looks at anything on the internet, there is an algorithm involved.”
Justice Ketanji Brown Jackson said, “To the extent that the question today is can we be sued for making recommendations, that’s just not something the statute was directed to.”
And Justice Brett Kavanaugh said, “Isn’t it better to keep it the way it is, for us, and to put the burden on Congress to change that, and they can consider the implications and make these predictive judgments?”
Then on May 18, 2023, the court issued its decisions in both cases. Both were unanimous.
In Twitter v. Taamneh, the court dismissed the allegations that Twitter violated the US Antiterrorism Act by failing to remove posts before a deadly attack. Justice Clarence Thomas wrote the opinion for the unanimous court, saying that Twitter's failure to police content was not an “affirmative act.”
And he expressed concern that if aiding-and-abetting liability were taken too far, merchants could become liable for misuse of their goods. He pointed out that email service providers should not be held liable for the contents of email. In fact, he explicitly compared Twitter to email and cell phone providers, who aren't culpable for their users' behavior. A cell phone service provider is not culpable for the illegal drug deals made over its phones.
Specifically regarding Twitter he wrote “There are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants’ relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm’s length, passive, and largely indifferent.”
And he even touched on the main issue from the other case, algorithmic recommendations. He wrote, “the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”
All of that meant the court could essentially dodge the entire issue in Gonzalez v. Google, which had rested more on whether YouTube was liable for its recommendations.
In an unsigned opinion the court wrote that the “liability claims are materially identical to those at issue in Twitter…” And “Since we hold that the complaint in that case fails to state a claim for aiding and abetting … it appears to follow that the complaint here likewise fails to state such a claim.” And “we therefore decline to address the application of section 230.” So the claims in Gonzalez were also dismissed.
In essence these opinions say that if algorithms are not specific to a kind of content, then recommending is not an “affirmative act.” And if you want to change that, Congress needs to pass a new law.
These two decisions left Section 230 unchanged.
In the end, what I want folks to take away is that Section 230 doesn't free a tech platform to do whatever it wants. It frees a platform to choose to moderate and exercise editorial control over the posts of others without having to assume responsibility for the thousands, and now millions, of posts made every day.
It’s reasonable to argue that perhaps there are some responsibilities that should be restored to tech platforms through legislation. I think it’s worth pointing out that repealing Section 230 altogether would not necessarily achieve that.
So I hope you now have a firmer basis on which to build your opinion, whatever it is. In other words, I hope you know a little more about Section 230.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos in conjunction with Will Sattelberg and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.

Samsung Sticking With Google Search – DTH

Samsung reportedly suspended its review of switching to Bing, Apple plans to mass-produce its own microLED displays, and Shein makes inroads back into India.

MP3

Please SUBSCRIBE HERE.

You can get an ad-free feed of Daily Tech Headlines for $3 a month here.

A special thanks to all our supporters–without you, none of this would be possible.

Big thanks to Dan Lueders for the theme music.

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods, KAPT_Kipper, and PJReese on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here.

VR Right Under Our Noses – DTNS 4523

Smell-o-vision makes a return, this time in VR. And SCOTUS issues its rulings on two cases that could have affected Section 230; we go over the rulings and explain what happened. And we discuss the startup Sightful, which announced the Spacetop, described as an “augmented reality laptop” that uses AR glasses in place of a real screen.

Starring Sarah Lane, Rich Stroffolino, Dr. Nicole Ackermans, Roger Chang, Joe

MP3 Download


Using a Screen Reader? Click here

Download the VIDEO VERSION.

Follow us on Twitter, Instagram, YouTube and Twitch

Please SUBSCRIBE HERE.

Subscribe through Apple Podcasts.

A special thanks to all our supporters–without you, none of this would be possible.

If you are willing to support the show or give as little as 10 cents a day on Patreon, thank you!

Become a Patron!

Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods Jack_Shid and KAPT_Kipper on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here!


US Supreme Court Doesn’t Touch Section 230 – DTH

The court dodges commenting on Section 230 in two cases, YouTube will launch 30-second unskippable ads, and chipmakers flee China for Japan.

MP3

Please SUBSCRIBE HERE.

You can get an ad-free feed of Daily Tech Headlines for $3 a month here.

A special thanks to all our supporters–without you, none of this would be possible.

Big thanks to Dan Lueders for the theme music.

Big thanks to Mustafa A. from thepolarcat.com for the logo!

Thanks to our mods, KAPT_Kipper, and PJReese on the subreddit

Send us email to feedback@dailytechnewsshow.com

Show Notes
To read the show notes in a separate page click here.

About the DMCA (Updated)


By 1998 the US had passed its Digital Millennium Copyright Act. And partly because the US generates so much copyrightable material, and partly just because it’s the US and is a little pushy on the world stage, the DMCA became the de facto way of handling copyright protections on the internet around the world.

But what is it? Why did we need the DMCA or the WIPO copyright treaty at all?

Let’s help you Know a Little more about the DMCA.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to feedback@dailytechnewsshow.com

Episode transcript:

It’s April 26, 1970. Joe Cocker is playing live at the Fillmore. The Jackson 5’s ABC is dominating the charts In Novo Mesto, Slovenia, little Melanija Knavs is born. And after three years of planning, the World Intellectual Property Organization has begun operations. The purpose of the specialized agency is to provide a place for countries to work together on their various intellectual property laws and rules. Copyright is of course the most well known type of intellectual property these days but it also includes trademarks and patents and such. WIPO is meant to be a clearing house. A place to try to harmonize. I’ll respect your patents if you respect mine etc. In fact its first big achievement is the Patent Cooperation Treaty which to oversimplify, made filing a patent in one country equivalent to filing in all. Now different countries still had latitude to approve or deny patents according to their own laws, but it made things a lot simpler.
WIPO made lots of other treaties and systems to make it easier to handle trademarks and service marks. It created mediation and arbitration to help resolve disputes between countries over these kinds of matters.
And in September 1995 it took up the digital agenda. Copyright came to the fore. And somehow, some way, WIPO agreed on new rules faster than it had agreed on almost anything before. By December 1996 there was a diplomatic conference to approve the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty.
Those two treaties brought countries together to agree on how to handle digital copyright protection. Each country then had to pass its own law to implement the treaty.
By 1998 the US had passed its Digital Millennium Copyright Act. And partly because the US generates so much copyrightable material, and partly just because it’s the US and is a little pushy on the world stage, the DMCA became the de facto way of handling copyright protections on the internet around the world.
But what is it? Why did we need the DMCA or the WIPO copyright treaty at all?
Let’s help you Know a Little more about the DMCA.

Worries about copyright violations have existed since the internet became more than just something university IT experts used.
Digital content is infinitely copyable and the internet makes it infinitely transferable. That’s a nightmare for businesses built on physical limitations to copying, like music, movies and others.
To extend these older business models onto the internet, companies use digital rights management, or DRM. This is a name for various ways of trying to lock up content so that only a user who is authorized to view it can. It's an attempt to make content not infinitely copyable. DRM is tricky, though, because you have to balance access for the person who does have the right (like a paying customer) with denying access to anyone who doesn't. Those are cross purposes. If you leave a door open for authorized viewers, eventually unauthorized viewers will figure a way through it.
So the industry quickly turned to the law, and we get the Digital Millennium Copyright Act. Or DMCA. While this is only a law in the US, it affects anyone who publishes content in the US, such as on YouTube and has provided a model for laws like it around the world.
The problem it solves is that no matter what digital locks you put on a file, someone can figure out a way to break them. So the law fixes this by making it illegal to break them.
That’s one of the main misunderstandings about the DMCA. It doesn’t just make unauthorized access illegal. That was already illegal under copyright law. It makes circumventing access protections illegal, punishable by fines and imprisonment.
Copyright holders can seek up to $2,500 per violation, or statutory damages up to $25,000. Repeat offenders can face more. If you are accused of willfully violating the DMCA for personal or commercial financial gain, you can be tried as a criminal offender. A first-time criminal DMCA violator can face a fine of up to $500,000, up to five years in jail, or both. Repeat offenders can be fined up to a million dollars and up to ten years in prison.
Screen capturing can count as circumvention under the DMCA in some cases. Keep that in mind.
The DMCA was passed as an amendment to the US Copyright Act in 1998. It implemented those two 1996 treaties of the World Intellectual Property Organization.
It makes it illegal to produce or disseminate (even if you give them away free) any device or service INTENDED to circumvent measures that control access to copyrighted works. Courts decide whether a device or service is intended to do this. Because, you know, computers can do this, but it's not their sole intention. And that's why screen-capturing software is not automatically illegal.
The other aspect of the DMCA is that it makes it illegal to circumvent access controls EVEN IF copyright is not infringed. Yep. If you have a fair use for something, like making a backup of a DVD, it is still illegal under the DMCA to circumvent the copy protection in order to make fair use of that backup. The DMCA includes some limited exemptions, such as for security research and government research, but they are few.
Now if you’re saying hold on I thought they changed that and made DVD copying legal. We’ll get to that later but yes and no.
There are a couple more aspects to keep in mind. One is that the United States Copyright Office (part of the Library of Congress) was given the power to create (and get rid of) further exemptions to the DMCA. So it can restore fair uses on a case-by-case basis. More on that later.
And then there's a safe harbor for platforms. Online service providers, which includes platforms like YouTube and Facebook, are exempt from liability for their users' copyright infringement as long as they follow certain procedures. Platforms keep their safe harbor by promptly blocking access to infringing material once they are notified of an infringement claim. This is called the “notice and takedown” process. It also provides for a counter notification from a user who claims the material is not infringing.
There's also an exemption for a repair person who makes limited copies solely for the purpose of repairing a machine. In other words, imaging a drive in order to restore it onto a replacement drive doesn't violate the DMCA. There are also some provisions for distance education, ephemeral copies made in the process of broadcasting, and more.
The DMCA's Title V is my favorite. Title V provides protection for boat hull designs, because boat hulls are not covered by copyright: they cannot be separated from their useful function and are therefore better protected by patents than copyright. This section was added in 1998 after the Supreme Court ruled, in Bonito Boats, Inc. v. Thunder Craft Boats, Inc., that boat hulls did not have copyright protection. Boat manufacturers promptly lobbied Congress to add the protection to the DMCA. As of 2019 there had been 538 applications to register boat hull designs under the DMCA, compared to more than 70,000 patents granted.
OK back to the notice and takedown system.
The notice and takedown system is governed by Section 512 of the DMCA.
In order to get the safe harbor protection, a service provider has to have an agent on file who takes notifications. The provider can’t have reasonably known about the infringing activity or directly benefit financially from infringing activity. In other words, your main business can’t be infringement.
Ok. Now you’re a safe harbor protected platform. How does it work if somebody thinks their copyright has been infringed on your platform?
Well it works differently for every system but here are the parts required by Section 512.
The notifier must send a formal takedown request notification under penalty of perjury. They can’t knowingly lie about it.
Once a notice is received, the provider must “expeditiously take down or block access to the material.” Right away. No grace period. It must also promptly notify the user that the content has been removed or disabled.
The user can then file a counter-notification, also under penalty of perjury, stating that their content was identified as infringing through a mistake or misidentification.
That sends it back to the notifier. If the notifier does not file a lawsuit seeking a court order against the user, the provider must restore the content within 10 to 14 business days.
So yes. Send a takedown notice, and the content goes down immediately. Send a counter-notice, and it takes 10 to 14 days to get it back up.
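Stripped of the legal detail, the Section 512 flow described above behaves like a small state machine. Here’s an illustrative sketch of those steps; all of the class and function names are my own invention for illustration, not anything from the statute or any real platform’s code.

```python
# Illustrative sketch of the Section 512 notice-and-takedown flow.
# Names are invented shorthand; the statute specifies steps, not code.

from dataclasses import dataclass

@dataclass
class HostedContent:
    title: str
    online: bool = True          # content starts out publicly available
    notice_filed: bool = False   # formal takedown notice received?
    counter_filed: bool = False  # counter-notification received?

def receive_takedown_notice(item: HostedContent) -> None:
    """Provider must 'expeditiously' remove or block the material."""
    item.notice_filed = True
    item.online = False          # down immediately, no grace period
    # ...the provider must also promptly notify the user here...

def receive_counter_notice(item: HostedContent) -> None:
    """User swears, under penalty of perjury, the takedown was a mistake."""
    item.counter_filed = True
    # Content is NOT restored yet; the notifier gets 10-14 business
    # days to go to court first.

def restoration_window_elapsed(item: HostedContent, notifier_sued: bool) -> None:
    """After 10-14 business days, restore unless the notifier went to court."""
    if item.counter_filed and not notifier_sued:
        item.online = True

video = HostedContent("my remix")
receive_takedown_notice(video)
assert video.online is False     # removed immediately on notice
receive_counter_notice(video)
assert video.online is False     # still down during the waiting period
restoration_window_elapsed(video, notifier_sued=False)
assert video.online is True      # restored: no lawsuit was filed
```

Note the asymmetry the sketch makes visible: the takedown branch flips `online` to `False` instantly, while the restore branch can only run after the waiting period, and only if nobody sued.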
So you could abuse the system by just sending notices for anything you wanted to disappear from the internet for a couple of weeks right?
Well, those perjury conditions are meant to keep the system from being abused, but in practice they’re hard to prove. Being mistaken is not the same as perjury, so you have to prove that a company KNEW the content was not infringing when it sent the notice. And end users are much more likely to balk at risking a perjury lawsuit than the large companies who send bulk notices are, so most takedown notices are successful. Willful and malicious abuses are rare. Mistakes, however, are rampant. Lots of companies have been accused of sending inaccurate bulk takedown notices, sometimes even ones affecting their own employees. But that’s not the same as perjury.
There is also a chilling effect to the DMCA. A content-hosting platform can avoid running afoul of the DMCA by simply not hosting some material at all. It’s not required to host it. So some companies, like YouTube, have employed “informal” takedown notices that are not meant to be the legally required notices. These are usually constructed as terms-of-service violations. This lets them take down content without anyone risking a perjury charge. Companies have the right to operate outside the DMCA in this way because the law can’t force them to host content they don’t want to. A copyright holder is only subject to perjury restrictions if they are following a “formal” takedown procedure. YouTube does have a method of proceeding from informal takedowns to formal ones.
For years YouTube has used a bot system called Content ID to look for possibly infringing content. If the bot thought it saw a match to a database of content provided by big copyright holders, it would pull the content off the site and notify the user it had been pulled. This was not part of the DMCA.
If the user disputed the Content ID claim, YouTube would then contact the alleged rightsholder. The rightsholder could release the claim, and the content would go back up, or could uphold the claim, in which case the user would be notified that the rightsholder still claimed the content was infringing and it would stay down. This was partly DMCA, as this could also serve as the rightsholder’s formal takedown notice. But since the bot had identified the content as infringing, the risk of perjury for the rightsholder was almost nothing.
If the user did not have an account in good standing or had already appealed three other claims that was it. The DMCA never entered into it for the user. YouTube just declined to host the content because they didn’t want to.
However, if the user was in good standing and had not reached the appeal limit, a DMCA counter-notification would then be issued to the rightsholder (with the risk of perjury for the user still there) and the normal DMCA takedown procedure would take place. The rightsholder would then have to decide whether to pursue it in court or not.
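As a rough sketch of how that Content ID dispute branches, the decision could be modeled like this. The function name, parameters, and the three-appeal threshold below are my own shorthand for the process as described; this is not YouTube’s actual API or policy code.

```python
# Rough model of the Content ID dispute flow described above.
# Names are illustrative, not YouTube's real interface.

def handle_contentid_dispute(rightsholder_releases: bool,
                             user_in_good_standing: bool,
                             appeals_used: int) -> str:
    """Return what happens after a user disputes a Content ID claim."""
    if rightsholder_releases:
        return "content restored"  # claim released, video goes back up
    # Rightsholder upholds the claim, so the content stays down.
    if not user_in_good_standing or appeals_used >= 3:
        # Pure terms-of-service decision: the DMCA never enters into it.
        return "stays down, no DMCA process"
    # User can escalate: a formal DMCA counter-notification goes to the
    # rightsholder, who must then decide whether to go to court.
    return "formal DMCA counter-notification issued"

assert handle_contentid_dispute(True, True, 0) == "content restored"
assert handle_contentid_dispute(False, False, 0) == "stays down, no DMCA process"
assert handle_contentid_dispute(False, True, 1) == "formal DMCA counter-notification issued"
```

The key branch is the middle one: for users out of good standing or over the appeal limit, YouTube simply declines to host the content, and the DMCA machinery never starts.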
As I mentioned earlier, the US Copyright Office can make exemptions to the DMCA. It regularly reviews exemptions and can add, extend, or remove them.
The Copyright Office has issued exemptions to the DMCA over the years. Here’s a look at a few of them.
The first two, in 2000, were for website filtering (you know, safe-sites-for-kids kind of stuff) and for preservation of damaged or obsolete software and databases.
In 2003 an exemption was given to screen readers for e-books and one for video games distributed in obsolete formats.
A brief exemption was given in 2006 for sound recordings protected by software with security flaws, specifically the Sony rootkit, along with one for unlocking wireless phones.
In 2010 an exemption for breaking DVD’s Content Scrambling System was issued for educational, documentary, noncommercial, or preservation uses, as was one for security testing of video games.
In 2012 an exemption for excerpting short portions of movies for criticism or comment was given.
In 2018 came one for 3D printers, if the sole purpose is to use alternate feedstock, as well as expansions of the exemptions for preservation and security research.
In October 2021, an exemption was given for repairing any consumer device that relies on software, as well as medical devices and land, sea, and air vehicles, even if they aren’t consumer-focused.
What if you’re outside the US? Why should you care? On the one hand, you’re right, US law doesn’t apply outside the US. However, copyright owners from outside the US can still issue takedown notices on US sites. But the bigger thing to remember is that the DMCA is the US implementation of the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty. The WIPO Copyright Treaty was signed by 110 countries, and most members of the World Intellectual Property Organization, or WIPO, have agreed to accept DMCA takedown notices. Think of it like this: a country adopted the WIPO treaties, the US created a system to enforce them, and the country just borrows that system. It’s not that US law is enforceable in their country; it’s that the US enforcement system for the WIPO treaties is a nice prepackaged way to do things. Copyright enforcement as a service!
Some countries, however, are known as DMCA-ignored countries. These are countries that either have not agreed to WIPO’s provisions, systematically ignore those provisions, or prioritize their own copyright laws over those of the US, and so their websites do not honor DMCA requests.
These include Russia, Bulgaria, Luxembourg, the Netherlands, Hong Kong, Singapore, Malaysia, Switzerland, and Moldova. They are often promoted as places to host websites if you’re concerned about copyright infringement, though each carries its own set of concerns, either with local laws or political speech. China doesn’t necessarily honor the DMCA, but it has enough other restrictions that it’s generally not included on these lists.
Nobody loves the DMCA, but it has proved to be surprisingly stable. Its next big test will be machine-generated works like those from ChatGPT and the multiple text-to-image generators.
So far the discussions have been about where copyright applies, but that is going to drift into the DMCA and put its uneasy equilibrium to the test.
For example, in April 2023 an unknown composer created a song and used some machine generation to make it sound like Drake and The Weeknd. The song’s lyrics and beats were original, but the artist had used a producer tag that was not. Universal Music Group used that producer tag as the basis for copyright takedowns. But versions without the tag would force the issue.
That’s the first, not the last, of what will be a long discussion about where machine-generated works fall in copyright. How that discussion plays out will likely determine whether the DMCA stays standing, gets modified, or is rewritten altogether.
So that is the Digital Millennium Copyright Act, aka the DMCA.
It makes it illegal to circumvent copyright protection unless there is an exemption written in the act itself, or added by the US Copyright Office.
It also provides a way to try to get infringing material removed and a way for a user to combat having that material removed.
I hope this helps you understand why some content is allowed up and some is not and why you don’t see some content at all.
In other words, I hope you know a little more about the DMCA.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos in conjunction with Will Sattelberg and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.