A few days ago, ABC News ran an "investigative" piece called "Group Probes Ease and Danger of Buying Steroids Online." ABC describes the "group" at issue as "an online watchdog," the Digital Citizens Alliance. That group determined that some of the millions of available YouTube videos encourage steroid use and that YouTube (which is owned by Google) places ads next to steroid-related videos and search results. The group argues that Google and YouTube should be held legally responsible for any illegal content linked or posted.
ABC News could have told the story differently: a Microsoft-backed group led by a public relations firm (but named for an "alliance" of "citizens") is holding Google and YouTube to a standard that Microsoft itself fails to meet, while effectively arguing for filtering of the Internet through appeals to the emotional issue of teenage steroid use.
Let's begin with the big picture and move to the details of this group.
Filtering the Internet is a terrible idea, even to stop illegal drug sales.
It is awful that teenagers turn to any illegal drugs. But perspective is needed. We know some teenagers buy drugs at school; we don't shut down schools, we don't search every student, we don't monitor everything they say, we don't require them to get permission from an adult before speaking with one another. We engage in education efforts and responsive actions. We also know that people will use the Internet to communicate about everything from coordinating a democratic revolution and reporting government corruption to idle chit chat to illegal activity.
The Digital Citizens Alliance is actually arguing for a filtered Internet. DCA claims that companies should be liable for any illegal content shared on a site. If Twitter, Google, Facebook, Yahoo, and others were liable for the acts of all the slanderers, copyright infringers, fraudsters, conspirators, and drug pushers on their sites, then they would have to filter all the content on their sites. With a billion users, if even 0.1% of them are wrongdoers, then a platform would be liable for one million wrongdoers. It could not take on the risk of legal action over all those potential wrongdoers. That means these companies would have to filter content in advance. The Digital Citizens Alliance cannot mean that companies simply have to act quickly and take down illegal content once notified; these companies already take down content when it is reported or flagged for violating their terms of service forbidding illegal activity.
The existing rules strike the right balance. For the past two decades we have had a set of rules to ensure freedom of expression online while limiting illegal activity. Those rules generally enable companies like Twitter, Facebook, Google, and the New York Times online to carry the speech of millions or billions of people, empowering all of us to publish and comment — through tweets, posts, pages and videos, or comments on stories. They are able to carry the speech of so many people because they are not liable for all the illegal content posted by every single person. (The laws include the celebrated Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act.) Instead of these companies being liable, the actual wrongdoers are responsible: the slanderers, the sites that traffic in drugs, etc. Recently, the authorities busted an online drug bazaar and a child prostitution ring without having to change the Internet's Magna Carta and make tech platforms liable for all the content on their sites. If they were liable, these companies simply would not be able to act as platforms and networks for billions of people. They would have to filter all content in advance and become editors of their platforms, closing opportunities for average speakers.
Companies like Google make huge efforts to remove illegal content. Most platforms for the speech of billions of users have to rely in part on users flagging or reporting content. That approach is far more effective and respectful of free expression than attempts to filter through computer algorithms. Go to Twitter: you can "report" every tweet. Check YouTube: every single video has a flag icon. Every piece of content on Facebook can be reported. Considering the number of users and the amount of content shared, this flagging is essential. I wrote about this in some detail here. More briefly: one hundred hours of video are uploaded to YouTube every single minute, and that much content can't be filtered in advance without requiring YouTube to limit who can post. Google's search engine indexes trillions of pages and reflects the web; Google can't filter them all and shouldn't have to. In one month alone, however, Google processed over 18 million requests to remove URLs from its search results based on copyright concerns, and it removed 97% of the URLs requested between July 2011 and December 2011. Google also makes efforts to ensure ads are not placed alongside illegal content. (I provide the sources in the other post.)
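To get a feel for that scale, here is a rough back-of-the-envelope check in Python (assuming, purely for illustration, that a human reviewer can screen video no faster than real time and works an eight-hour day):

# Back-of-the-envelope scale check for pre-screening YouTube uploads.
hours_uploaded_per_minute = 100
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24   # 144,000 hours of new video per day
reviewer_hours_per_day = 8                                     # one full-time reviewer, illustrative assumption
reviewers_needed = hours_uploaded_per_day / reviewer_hours_per_day
print(hours_uploaded_per_day, reviewers_needed)                # 144000 hours/day; 18,000 people just to watch one day's uploads once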
The Digital Citizens Alliance is a Microsoft-backed group, which is the only reason Microsoft is not their target.
This is an old story. The story is Microsoft's ongoing strategy of attacking Google in slanted advertisements and through political PR efforts. It's also the story, it seems, of the copyright industry, which has long argued, in various ways, for pre-filtering all content, including when it attempted to push an infamous censorship bill called SOPA.
DCA is backed by Microsoft and not a citizens alliance.
The Digital Citizens Alliance is not an actual alliance of citizens; it is known to be backed by Microsoft. Techdirt called DCA an obvious "astroturf" group, not a real "grassroots" group. Two of DCA's three staff members (Tom Galvin and Dan Palumbo) are employees of the DC public relations firm 463 Communications, and the third also works in PR. That is not the makeup of, say, the ACLU, EFF, Consumers Union, or any legitimate consumer group. The alliance's advisory board includes someone from the Association for Competitive Technology, an organization that receives over a million dollars from Microsoft every year. I live in DC and know folks at 463, ACT, and Microsoft — in fact I even like all of them I know. It's just that it's obvious to me and anyone in DC: an organization with this backing and structure is not an online watchdog or an advocacy group but a corporate PR vehicle.
This close connection with Microsoft explains why DCA has not attacked Microsoft for the exact same things. In fact, if you do a Microsoft Bing search for "buy steroids," you will see that ads accompany the results, but you will not see such ads for the same search on Google.
It's understandable why something might fall through the cracks on Bing: the Internet is a big place with trillions of pages and billions of real human users who do things that are sometimes unsavory. It is impossible to police them all in advance, and requiring companies to do so would undermine free expression and change the nature of the Internet. The Digital Citizens Alliance should let Bing know about this issue. But that's clearly not the intent of the alliance. It's not around to actually make the Internet a safer place, just to be part of a PR attack on a specific company.
Disclosure: I advise several companies, including Google, on free expression law and public policy.
Written by Marvin Ammori, Fellow at the New America Foundation, Lawyer at The Ammori Group
CircleID: In Which We Consider the Meaning of 'Authorized': GIVAUDAN FRAGRANCES CORPORATION v. Krivda
"When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean — neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master — that's all."
—Lewis Carroll, Through the Looking Glass
What does authorized access mean? If an employee with authorized access to a computer system goes into that system, downloads company secrets, and hands that information over to the company's competitor, did that alleged misappropriation of company information constitute unauthorized access?
This is no small question. If the access is unauthorized, the employee potentially violated the Computer Fraud and Abuse Act (CFAA), which contains both criminal and civil causes of action. But courts get uncomfortable when contractual disputes morph into criminal violations. If, for example, a site's Terms of Service says that I must use my real name, and I use a pseudonym, is my access unauthorized? We have seen over-zealous prosecutors attempt to transform non-compliance with a TOS into a criminal act. Courts don't like it.
But not all courts agree; there is a split between the Circuit Courts that believe such actions by an employee constitute a criminal violation of the CFAA and those that believe the matter is best handled as a breach of contract between employer and employee.
Today's court decision comes from the District Court in New Jersey (which is in the 3rd Circuit): GIVAUDAN FRAGRANCES CORPORATION v. Krivda, Dist. Court, D. New Jersey Sept. 26, 2013. The facts of this case are as might be expected:
In early May, 2008, Krivda resigned his employment with Plaintiff, Givaudan Fragrances ("Givaudan") where he was a perfumer. Prior to his last day on the job, Krivda allegedly downloaded and copied a number of formulas for fragrances. The parties acknowledge the formulas as trade secrets. Soon thereafter, Krivda commenced employment as a perfumer with Mane USA (Mane), a Givaudan competitor. Givaudan alleges that Krivda gave the formulas to Mane — an act of misappropriation.
Plaintiff Givaudan sued. Before the court is Defendant Krivda's Motion to Dismiss the CFAA cause of action. Defendant argued that since his alleged access of Plaintiff's computers while employed was authorized, it could not constitute unauthorized access pursuant to the CFAA.
The New Jersey District Court looked to the 9th Circuit (the West Coast) as one of the lead Circuits that has considered this issue.
Generally, the Computer Fraud and Abuse Act § 1030(a)(4), prohibits the unauthorized access to information rather than unauthorized use of such information. The Ninth Circuit has explained that "a person who 'intentionally accesses a computer without authorization' . . . accesses a computer without any permission at all, while a person who 'exceeds authorized access' . . . has permission to access the computer, but accesses information on the computer that the person is not entitled to access." The inquiry depends not on the employee's motivation for accessing the information, but rather whether the access to that information was authorized. While disloyal employee conduct might have a remedy in state law, the reach of the CFAA does not extend to instances where the employee was authorized to access the information he later utilized to the possible detriment of his former employer.
(Citations and other stuff omitted).
In the case at hand, the defendant employee had, at the time, authorization to access plaintiff's computers and the specific information at issue. The access was therefore authorized under the CFAA, regardless of what the defendant later did with it. Furthermore, the phrase in the CFAA about someone exceeding their authorization doesn't help plaintiff here; it refers to the situation where someone has authority to access one system and then accesses another system. That is not the situation before the court. Plaintiff argues, "Well, defendant didn't have our authority to review and print the information." To which the court responds that such quibbling "does not fall within the definition of exceeds authorized access."
Defendant may have other trouble with Plaintiff, but Plaintiff's cause of action for a violation of the Computer Fraud and Abuse Act is disposed of.
Written by Robert Cannon, Cybertelecom
In a very casual and low-key footnote over the weekend, ICANN announced it would be further bypassing the Affirmation of Commitments and ignoring the WHOIS Review Team Report. There will be no enhanced validation or verification of WHOIS because unidentified people citing unknown statistics have said it would be too expensive. Here is the exact quote sent to the Accountability and Transparency Review Team:
Regarding the WHOIS verification goals for the 2013 RAA, while it is true that ICANN initially sought more expansive WHOIS validation/verification requirements, questions were raised related to the costs associated with implementing them on a global basis.
On a topic which has burned untold hours of community debate and development, this vague, minimalist statement dismisses every ounce of work put in by stakeholders. For an organization that loves studies, there is no study cited here which demonstrates how the process would be too expensive. And which process? Has ICANN ever requested proposals to develop a validation process? Without actual proposals to review, how does ICANN determine it would be too expensive? We all know that WHOIS inaccuracy has been a bone of contention for over a decade now, which led to the AoC provision committing ICANN to maintain timely, unrestricted and public access to accurate and complete WHOIS information.
But now ICANN has just decided not to do it.
One of the major outcomes of the AoC was the creation of the WHOIS Review Team to find a path for ICANN to tackle WHOIS. This cross-constituency working group issued a 92-page report which recommended that WHOIS become a strategic priority for ICANN (but that would be too expensive). The review team said ICANN should reduce the number of inaccurate WHOIS records by 50% every year (too expensive). But let me take a step back. ICANN doesn't actually say validation would be too expensive; it merely states that "questions were raised related to the costs." So questions raised by persons unknown are enough to thwart years of effort by the Internet community. Does anyone get to ask questions about the costs associated with bad WHOIS? Are the six phantom compliance employees ready to deal with this?
So, what does this get us? It gets records like the one for the illicit pharmacy site nobledrugstore[DOT]com, which is completely BLANK:
Using WHOIS server whois.dattatec.com, port 43, to find nobledrugstore.com
Datttatec.com - Registration Service Provided By: Dattatec.com
Contact: +54 341 599000
Domain name: nobledrugstore.com
Creation Date: 2012-07-25
Expiration Date: 2016-01-23
Domain Name servers(es):
- ( zip: )
Phone : -
- ( zip: )
Phone : -
- ( zip: )
Phone : -
- ( zip: )
Phone : -
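For readers who want to check a record like this themselves, WHOIS is a very simple text protocol: open a TCP connection to port 43 of the registrar's WHOIS server, send the domain name, and read back whatever the server returns. A minimal sketch in Python (using the server named in the output above; any standard whois command-line client does essentially the same thing):

import socket

def whois_query(domain, server="whois.dattatec.com", port=43):
    # WHOIS (RFC 3912): send the query terminated by CRLF, then read until the server closes.
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois_query("nobledrugstore.com"))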
Written by Garth Bruen, Internet Fraud Analyst and Policy Developer
Paul Mockapetris to Serve as Senior Security Advisor to ICANN's Generic Domains Division
ICANN has announced that Paul Mockapetris, inventor of the Domain Name System (DNS), has agreed to serve as Senior Security Advisor to the Generic Domains Division and its President, Akram Atallah.
"The Domain Name System has met the needs of the Internet for secure and reliable service and growth in size and function," said Mockapetris. "I'm looking forward to helping ICANN continue that tradition."
Mockapetris created the DNS in the 1980s while at the University of Southern California's Information Sciences Institute. He also has been an active member of the Internet Engineering Task Force since its inception, serving as its chairman from 1994-1996. Paul Mockapetris was also recently named chairman of ICANN's Strategy Panel on Identifier Technology Innovation.
Word to the wise: Fadi Chehadé's ICANN isn't going to take criticism lying down!
In the past, the organisation has tended to react to criticism with a silence that was probably considered a way to avoid aggravating critics any further, but that instead tended to infuriate people who were expecting answers.
No longer. Since Chehadé came in as CEO, critics get answers! Chehadé has quite rightly infused his staff with a culture of pride in what ICANN does, a message he has often carried himself. Whilst remaining open to criticism, he will answer if and when he feels that criticism is unfair or unjustified.
A recent letter by Verisign's Chuck Gomes (published here by Chuck on CircleID) clearly fits that bill. In a response dated October 3, 2013, and made public today, ICANN's VP for Domain Name Services, Cyrus Namazi, writing at Chehadé's request, has reacted strongly to Gomes' accusations that ICANN has not been behaving as it should.
"Your letter makes vague and unsupported accusations about ICANN not operating as a multi-stakeholder, accountable organization," writes Namazi. "It appears to rely exclusively on examples in which your company would have preferred a different result. It is not surprising that you would take positions in the letter that are consistent with the outcomes being sought by your company. But in the light of your personal involvement with ICANN over many years, I have to assume that your own views on these issues are at least more nuanced."
Whilst some statements in Namazi's letter come across as stern but well founded ("to the extent that Verisign is unhappy with the new gTLD registry agreement, it is free not to sign"), there is also a level of dishonesty in the responses. I mean, if anyone, not just Verisign, is unhappy with an ICANN contract, it's not as if they can go somewhere else and get the same service. ICANN has a monopoly over gTLD contracting and therefore a strong responsibility to make sure everyone in the community is comfortable with its contracts. I would therefore suggest that "If you don't like it, shove it!" might not be as appropriate a response as a more nuanced "these contracts have been discussed for years and at some point, we need to move on"…
Namazi is also strong in his response to Verisign's security concerns. "Your accusation that ICANN is prioritizing the New gTLD Program over security is inaccurate and, frankly, reckless." Many in the community have voiced similar opinions of late, in response to Verisign's insistence that there are risks with the new gTLDs and that these are being ignored.
The letter leaves me with mixed feelings. On the one hand, I appreciate the stronger stance ICANN is now taking against critics. On the other, I am ill at ease with what at times feels like unwarranted personal attacks. "We acknowledge the importance and value of your participation as a former Chair of the GNSO. We also understand that you write this letter as a representative of your company, Verisign," Namazi writes at the start of the letter, before ending with "I urge you to re-assume your role as a leader within the ICANN community."
So does that mean that because he served as GNSO Chair, Chuck should now refrain from calling it as he, or his employer Verisign, sees it? Surely that's like doing double time. You work hard to chair a key ICANN group in a volunteer position, and then once out of there you must continue to toe the ICANN company line. Really?
If that's true, perhaps I shouldn't be writing any article that isn't 100% supportive of everything ICANN says or does…
However, I fully agree with Namazi's closing sentence: "it's time to lock arms, move on and tend to the real business at hand." That goes for everyone, ICANN critics and ICANN alike.
Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING
Australia will be an interesting test market for VDSL. With a new government and the broadband infrastructure company NBN Co basically in agreement, it is most likely that VDSL will be used to bring fast broadband to, for example, multi-dwelling units (MDUs).
It was mainly for political reasons that the previous government stopped NBN Co from deploying VDSL technology in MDUs for this purpose. Whether or not any more VDSL will be deployed beyond that will largely depend on NBN Co's review of its current plan. If there is indeed no cost blowout, and if the timeframe can in fact be maintained, there is a good chance that the majority of that plan will survive.
Obviously another issue that will need to be addressed here is customer expectation. Will people in MDUs accept that they will receive a VDSL connection rather than the FttH connection they were promised under the previous government?
VDSL an interim solution
The principal consideration regarding VDSL and its vectoring option continues to be that it is an interim broadband technology and will eventually have to be replaced by fibre-to-the-premises (FttP), an assessment also supported by the Australian Minister for Communications, Malcolm Turnbull.
But there are other more immediate issues that need to be considered in relation to a fibre-to-the-node (FttN) rollout based on VDSL.
In any economic and technical sense, VDSL vectoring can only be done in Australia by its national telco Telstra, and it will therefore be a monopolistic activity that needs to be regulated. In any case it will not give the government its much-wanted infrastructure competition. It is rather puzzling that this government is hanging on to infrastructure-based utility competition, a policy it had already trialed in the 1990s with HFC and which failed miserably — it provided only 25% penetration, overbuilt existing networks for 90% of its footprint, and led to financial write-offs of billions of dollars.
However, I argue that the government should instead concentrate its policies on maximizing competition on top of that infrastructure.
But apart from these policy issues there are also other considerations in relation to the cabinets that need to be installed for a VDSL rollout.
The aesthetics of the cabinets
Estimates vary greatly on the size of the FttN rollout, but up to 70,000 VDSL street cabinets will have to be deployed for the FttN rollout as it was outlined in the Coalition's policy document of April 2013, when they were still in opposition.
The size of these cabinets has shrunk somewhat from that of a double-sized fridge, but it is still considerable — something the size of the operating boxes near traffic lights. Where these boxes have been deployed, efforts have sometimes been made to make them more attractive, e.g., by planting flowers around them; but many more have proved to be ideal graffiti targets. As with everything, it also helps if the installation can be promoted as a positive development for the community, but in Australia, where FttN is a backward step from the originally promised FttP deployment, it could be difficult to put a positive spin on these cabinets.
Putting aesthetics aside, the more important question that may need to be asked is how reliable the cabinets are. My Dutch colleague, Hendrik Rood, brought the following to my attention…
Reliability and performance of the cabinets
In 2007 BTG, the Dutch association of MNEs (enterprise users), questioned the company installing these cabinets, Dutch incumbent telco KPN, about the availability and performance of FttN cabinets within the local loop architecture. KPN was rolling out FttN infrastructure in some cities entirely by itself, and at the time of questioning it did not yet own a share in Reggefiber, then still a competitor rolling out FttH. That changed at the end of 2008, when KPN acquired 41% of Reggefiber.
Finally, in September 2009, KPN, by then active in both FttN and FttH, came up with a presentation based on data collected by TNO (the Dutch national R&D organisation).
The key points of this presentation were:
- According to KPN, the FttN street cabinets contain two-hour battery backup. According to installing contractors, it is at best one hour. This compares to eight hours of battery back-up for similar equipment in local exchange buildings, and even diesel back-up in large central offices.
- Reggefiber/KPN's FttH AreaPoPs (points of interconnect) get four-hour battery back-up with a fast replacement service. In the Netherlands it takes at most two hours (without traffic jams) to drive from central operations headquarters to every corner of the country except the islands, so two hours is a fairly feasible window; one hour, however, is obviously not enough to avoid a shutdown during any serious power outage.
- With 2-hour battery backup, and based on data for power outages (short- and long-lasting) in Greater Amsterdam, it was calculated that unavailability due to power outages would be contained to one outage per 9 years (instead of one per 3.5 years for outages of more than 15 minutes without battery backup).
- However, for a business with a few hundred branches around the country this still averages out at about one outage per week. And it is also an issue if you run your alarm systems, etc., over the connection. None of these issues currently arises with ADSL (because central office locations have ample back-up batteries, and power consumption in those buildings is declining with less and less power-hungry electronics/switches).
Bottom line: while power outages in the Netherlands are much less frequent than in, say, Australia or the USA, a mean time between failures (MTBF) of 9 years per site, even after battery-backup measures, is still a considerable burden for a firm with 450 branch offices spread over the country. That's an average of about one outage per week.
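The arithmetic behind that figure is simple; a quick back-of-the-envelope check using only the numbers quoted above:

# Back-of-the-envelope check of the "roughly one outage per week" figure.
branches = 450            # branch offices, each served from its own street cabinet
mtbf_years = 9            # one power-related outage per cabinet per 9 years (with 2-hour battery)
outages_per_year = branches / mtbf_years     # 50 outages per year across the whole firm
outages_per_week = outages_per_year / 52     # about 0.96, i.e. roughly one per week
print(outages_per_year, round(outages_per_week, 2))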
Of course you can argue that your PC wouldn't work during a power outage anyway, but Hendrik argues that one has to be aware that battery backup in homes and offices for critical communications installations (fire and security alarms, etc.) is becoming increasingly common. It is in relation to these functions that 'lifeline' functionality is essential, and here FttH is superior to FttC/FttN, as was the old-fashioned PSTN/ADSL powered from local exchange buildings.
Issues in relation to savings coming from sharing infrastructure
In the TNO review it also turned out — KPN was obliged to confess — that it would never route its base station backhaul through street cabinets (not even with G.SHDSL.bis or FttC plus fibre extension, etc.) but would connect passively to the (regional) central offices.
Most of KPN's many thousands of GSM/3G base stations are currently backhauled over either microwave or multiple G.SHDSL links (for those on rooftops and nearer to central offices) that run passively through the company's local loop to the central office location. Higher bandwidths for base stations (to support more mobile data traffic for 3G and 4G) will be delivered either by high-bandwidth microwave or by passive fibre loops, perhaps spliced in at the manhole near the VDSL2 street cabinets on the city or regional ring linking those cabinets.
The latter implies trenching a spur towards the mobile tower directly from the ring that also feeds the VDSL2 street cabinets. This last-mile spur comes at full cost — that is, it cannot share fibres with the FttH local loop access networks. This inability to share infrastructure is, of course, more expensive.
In the Netherlands, when an apartment building gets FttH there will be spare fibre loops deployed to enable the connection of a rooftop base station (that currently is typically supplied by G.Shdsl from central office over passive copper to the rooftop installation). The difference between a separate spur and a shared local loop with the FttH plant is between ca. €6000-€8000 per base station (separate spur) and €1000 (passive FttH loop).
So the added cost is, say, 10 thousand base stations x €7000 = €70million (10k is the estimated number of high-bandwidth base stations at rooftop positions needed for high-bandwidth 4G service).
Of course that won't be the 'deal breaker' for lower CAPEX with a VDSL-based access network, but it is one of the hidden costs in choosing VDSL. The other one is raised levels of power consumption, as VDSL2+vectoring is far more power hungry per line (including CPE and DSLAM-cards) than an FttH-based solution.
The risk of short-circuiting
Explosion in VDSL street cabinet of AT&T in US – Big hole blown into the nearby wooden fence (Source: Light Reading)
The TNO review also looked into this issue. While it is an exceptional event, it cannot be excluded. The energy density of Li-ion batteries is much higher than that of the conventional lead-acid batteries used in central offices. As a result, serious short-circuiting in equipment can lead to an internal chemical reaction that causes an explosion — such an explosion happened in a VDSL street cabinet of AT&T in the USA. Light Reading collected some pictures, including the big hole blown into the nearby wooden fence.
If you want small street cabinets and a long battery lifetime (e.g., a few hours), you have to deploy dense Li-ion batteries, and the risk of a seriously damaging explosion increases.
Of course, short-circuiting isn't a frequent occurrence in a V-DSLAM, but it cannot be entirely ruled out for any equipment, and with a large installed base of street cabinets the chances grow of one or more short-circuits anywhere in the country over the technical lifespan of this new equipment deployed in street cabinets.
Until now telco operators haven't had any serious experience at scale with the impact of battery-backed-up street cabinets, and neither have CATV operators (they didn't use them, because they hardly served SMEs and MNEs over their HFC plant).
A number of mobile base stations also had relatively short-run UPSs (typically 30 minutes), which causes the mobile networks to shut down during large-scale power outages, despite people walking around with devices that last a day without charging. However, this attitude is gradually changing at the operators (and they earn enough money from services to consider serious battery back-up at the base station site, lasting many hours).
A hidden cost is that a typical supplier of UPSs (the 110-230 VAC systems that last 15-30 minutes, the time needed to power up the back-up diesel generator) charges an annual maintenance fee of ca. US$3,000-5,000 per device. UPSs are cheap in CAPEX but expensive in maintenance (due to the commercial model), which is why most telcos opt for a different design with AC-DC converters and batteries.
So, for the total VDSL picture it is worthwhile to consider not only the bandwidth issues, which are widely discussed, but also the battery issues and the power supply to the FttC/FttN street cabinets, and to compare this with the FttH option (which needs no outside-plant battery feeding). Another issue that needs to be taken into account is the short-circuit risk in those street cabinets.
Written by Paul Budde, Managing Director of Paul Budde Communication
In support of National Cyber Security Awareness Month, DDoS Awareness Day is a virtual, global event focused on raising awareness and education around the threat of DDoS attacks. Hosted by Neustar with exclusive media partner CSO, DDoS Awareness Day brings together top experts in global security to share their views, technical tips and from-the-trenches experience. Attendees will also be given access to a wealth of DDoS materials: white papers, surveys, presentations, best practices and more. (Click here to register for the event.)
Topics on the Agenda
DDoS Awareness Day will kick off at 9am London Time and coverage will continue through 6pm Pacific Time. Below are some highlights of the topics that will be discussed by the featured experts.
• DDoS Mitigation Explained
• DDoS 101
EMEA Market Trends and Customer Stories
• DDoS in the UK: What's really happening?
• Tales from the Trenches: Defending the Enterprise Against DDoS Attacks – A Real World Example
Expert Panel Discussions
• Experts' Panel: Current Security Landscape — the Insider's View
• If we woke up evil…
• The Million-Dollar Gamble: Why Yesterday's DDoS Protection Could Cost Your Business Big
• State of Disruption — DDoS in 2012-2013
The Underground, Types and Tools
• The Underground Economy: DDoS and the Cyber Black Market
• Types of DDoS Attacks: A Primer — What you can do to prepare
• Reloaded: Attack of the Shuriken 2013
Some of the Featured Experts Include:
Rodney Joffe, Senior VP and Technologist, Neustar
Gary Sockrider, Solutions Architect for the Americas, Arbor Networks
Michael Murray, Co-founder and Managing Partner of The Hacker Academy
Allison Nixon, Security Consultant, Integralis
Mark Weatherford, Principal, The Chertoff Group
Mark Bregman, Chief Technology Officer, Neustar
Jonathan Coombes, Chief Information Security Officer, Neustar
Darren Anstee, Solutions Architect Global Team Manager, Arbor Networks
Update: Oct 4, 2013 – Video introduction by Rodney Joffe, Senior VP and Technologist at Neustar, for the October 23rd DDoS Awareness Day event.
The ICANN Board has just announced its selections for the next Nominating Committee's leadership.
As a reminder, the Nominating Committee (NomCom) is designed to ensure skilled individuals go into key ICANN leadership positions. Every year, its recruitment and selection process leads to appointments for positions on the GNSO (Generic Names Supporting Organisation — ICANN's policy-making body for generic domains), the ccNSO (country code Names Supporting Organisation) and ALAC (At-Large Advisory Committee).
Moreover, the NomCom appoints 8 of ICANN's 16 voting Board members.
As befits such a crucial function, the NomCom's leadership structure is designed to ensure maximum efficiency. Each year, the ICANN Board selects a Chair and a Chair Elect. The Chair Elect shadows the Chair for a year in order to be fully prepped for when he or she rotates into the Chair role the following year.
In addition, each year the Chair also chooses an Associate Chair, basically a Vice Chair position. The Associate Chair is normally the previous year's Chair, in order to ensure the experience built up during that previous year is not lost to the next committee.
The Chair and Chair Elect positions are ICANN Board appointments requiring a full Board resolution to be approved.
For the 2014 NomCom, Cheryl Langdon-Orr, the 2013 Chair Elect, has been confirmed as 2014 Chair. Cheryl asked Yrjö Länsipuro, the 2013 Chair, to accept the Associate Chair role for this year. And I have been nominated as Chair Elect for 2014.
Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING
CircleID: NJ Content Liability Law Ruled Inconsistent with Sec. 230 (just like in Washington and Tennessee)
Unfortunate problems give rise to unfortunate solutions.
Back in a time before most members of Congress or prosecutors knew that there was an Internet, there was Prodigy. Prodigy, as part of its service, ran family-friendly chat rooms that it moderated in an effort to keep kids protected from unfortunate content. In a different Prodigy chat room, some unknown third party said something apparently bad about the investment firm Stratton Oakmont. Stratton Oakmont didn't like that very much, and sued. But unable to reach out and touch the third party, Stratton Oakmont sued the intermediary, Prodigy. The court observed that Prodigy exercised discretion over what could and could not be posted in the family-friendly chat room, and determined that Prodigy was acting in an editorial capacity, was a publisher, and was therefore responsible for all content published on its service — including the negative third-party comment about Stratton Oakmont.
Congress didn't like that very much. Congress had been warned that there was unfortunate content on the Internet. And Congress had been told that Prodigy, as a result of its efforts to make the Internet safer, was punished with liability. Congress was also told that it was next to impossible for online services to monitor the massive amounts of content that flowed through their pipes or was hosted on their servers. Therefore, Congress passed the Good Samaritan Provision, 47 U.S.C. § 230 (part of the Communications Decency Act, which was in turn part of the Telecommunications Act of 1996).
The Good Samaritan Provision established two principles: First, interactive online services (broadly defined) are not liable for third-party content. Second, interactive services are not liable for actions taken to make the Internet safer. Sec. 230 has been wildly successful; it has been described as the greatest Internet law and as the necessary legal condition for making the interactive Internet possible (of course, back in the good old days, when communications networks were not liable for the content they carried, this was a tenet of 'common carriage').
Unfortunately, as Miss Texas Teen USA observed in 1998*, "There's a lot of weirdos on the Internet." It is the Attorneys General's job to fight those weirdos and the unfortunate things they do. In order to promote their unfortunate behavior, weirdos place ads on services like Craigslist, Backpage, and other online advertisement services. The Attorneys General want this unfortunate activity stopped, and since they sometimes have trouble reaching out and touching those weirdos, the Attorneys General reach out and touch the intermediary online services. The Attorneys General have tried very hard to change the rules, to change Sec. 230, and to make online services liable for the unfortunate content of third-party weirdos, out of the belief that this will somehow make things better.
The Attorneys General reached out to state legislatures and convinced them that something needed to be done. And therefore several states passed laws that would make online services liable for third-party weirdo advertisements of unfortunate things. These states include Washington, Tennessee, and New Jersey. Online services didn't like that very much — and they sued.
The Attorneys General lost in Washington and they lost in Tennessee. And now the Attorneys General have lost in New Jersey. And they lost big. In Washington, Backpage.com sued and a temporary injunction was immediately granted. Its request for a permanent injunction was granted after a hearing. The state of Washington agreed not to pursue the matter further and agreed to pay Backpage.com's attorneys fees.
Tennessee passed similar legislation. Backpage.com again sued and again received an injunction. The trial court wrote:
The Constitution tells us that when freedom of speech hangs in the balance — the state may not use a butcher knife on a problem that requires a scalpel to fix. Nor may a state enforce a law that flatly conflicts with federal law. Yet, this appears to be what the Tennessee legislature has done in passing the law at issue.
Tennessee agreed not to pursue the matter further and entered into a final judgment invalidating the law.
But we're not done. In early 2013, New Jersey enacted legislation making it a crime if,
the person knowingly publishes, disseminates, or displays, or causes directly or indirectly, to be published, disseminated, or displayed, any advertisement for a commercial sex act, which is to take place in this State and which includes the depiction of a minor;
This NJ law was modeled after the Washington law. And while the unfortunate content in question makes the heart of anyone who reads it cry, it does not mean that making interactive online services liable for the unfortunate content of third parties is coherent, feasible, effective, or consistent with the First Amendment.
Once again a federal court struck down the law, in Backpage.com v. John Jay Hoffman, Acting Attorney General of the State of New Jersey (D.N.J. Aug. 20, 2013). There are multiple problems with the NJ law.
First, when a state law and a federal law conflict, the federal law preempts the state law pursuant to the Constitution's Supremacy Clause. The state law would make interactive services liable for the content of third parties; the federal law 47 U.S.C. § 230 states that interactive services are not liable for third party content. The Federal law preempts the state law.
But there is a further Sec. 230 problem that the court highlights. Sec. 230 was designed to protect interactive services that seek to make their services safer. The NJ law would have made it a crime to knowingly publish unfortunate content. This creates an unintended and unwanted incentive on the part of interactive services to not know what they are publishing - or in other words, to take no steps toward making their services safe. Again, this is a conflict between the state law and the federal law, and the federal law trumps.
The NJ statute also runs afoul of the First Amendment. Under the First Amendment, to the extent that you actually can be liable for publishing content, you must knowingly publish that content. The statute as written, in addition to knowing publications, would make an online service liable if it, without knowledge, directly or indirectly, caused the content to be published, disseminated, or displayed. As Congress concluded with the passage of Sec. 230, interactive services have little ability to monitor, review, or know all the content that flows over, is hosted on, or is posted to their services. The NJ statute is unconstitutional to the extent that it would make interactive services liable for the posting of content of which they have no knowledge.
Second, the law is not the least restrictive means of achieving a compelling government interest (going after individuals engaged in abuse of children would be more effective and less restrictive than indirectly going after intermediary communications services). Third, the NJ statute is filled with vague terms and overbroad requirements. Finally, the Court finds that the NJ statute would violate the Commerce Clause.
Unfortunate problems give rise to unfortunate solutions. Too often when confronted with unfortunate problems, those in authority feel that they must do "something," regardless of whether that "something" is actually a good idea. Frequently the "something" is a thing that is immediate and visible, and gives a false sense of security. It gives the feeling that the government has acted, where in fact it has not - and it may have even made things worse.
There is no denying that there is darkness out there that needs to be confronted. But as Congress rightly determined almost 20 years ago, attacking communications intermediaries for third party content is not the solution.
Written by Robert Cannon, Cybertelecom
Symantec has disabled part of one of the world's largest networks of infected computers, according to reports today. About 500,000 hijacked computers have been taken out of the 1.9 million-strong ZeroAccess botnet. The zombie computers were used for advertising and online currency fraud, and to infect other machines. Security experts warned that any benefits from the takedown might be short-lived.
Last week, I had the privilege of presenting at the Digital Marketing & gTLD Strategy Congress in London on how to create a TLD strategy and activate your path to market for launch.
Some of the best and brightest minds in the industry attended, and it was encouraging to hear from major brands such as Philips, Microsoft, Google and KPMG, as well as a variety of other applicants.
While in my previous blog I discussed why a .brand TLD strategy is important, let's now delve deeper into engagement strategies and why this is the key to a successful .brand.
Why do I need internal engagement?
Internal engagement is a critical element of a TLD strategy because your .brand TLD is going to impact every aspect of your organisation. From technology to marketing and even customer service, everyone in your organisation needs to be engaged in your TLD strategy to differing degrees.
While you may have already engaged key decision makers during the process of applying for a new TLD, many haven't sought the necessary strategic input across the organisation — something that is extremely challenging for multinational enterprises (and for some of their consultants!!).
You have to appreciate that how one department approaches your .brand TLD might be different from how another department does.
However, done correctly, your TLD strategy is the perfect mechanism to align key departments' .brand aspirations with your organisational goals.
Who should you engage internally?
Ideally, the critical areas of your business to target are your C-Suite executives, IT infrastructure and systems teams, digital, brand, legal and marketing departments. This is where the key decision makers who can make or break your .brand are found.
You should also consider bringing in the finance department, PR and internal communications teams, and any agency support your organisation receives from digital, branding and advertising specialists.
Finally, don't forget that even though you are a .brand, you'll need to engage your Registrar too (if you haven't already done so).
Remember, engaging with some internal audiences might be a challenge because there are still people out there that don't know anything about new TLDs.
Adopting a .brand is a massive change for any organisation.
It's important to remember that change is never easy and often clouded in risk as people intuitively resist transformation.
This is why your TLD strategy serves two purposes: 1) To provide purposeful direction in the launch of your TLD; and 2) To act as a mechanism to engage internally and gain the support of your key stakeholders.
The reality is that you're not only taking ownership of your .brand strategy, you will also be seen as the change facilitator. Leaders of large change programs must take responsibility for generating the critical mass movement in favor of the change. This requires more than mere buy-in or passive agreement; it demands complete ownership of the entire change process.
The five steps
I detail these steps in far greater depth during our TLD strategy workshop sessions. At a high level, below are the five key elements you should consider as part of internal engagement for your TLD strategy:
1. De-risk – A successful TLD strategy will need to take a 'whole of business' approach if it's to be effective. Remove the target from your back by involving key stakeholders early and de-risk your .brand TLD investment.
2. Get support from your TLD advisors – Get support from your trusted TLD advisors to guide you through the process. There's no need to reinvent the wheel.
3. Secure budget – You've made an investment in a core piece of Internet infrastructure. Now it's time to activate this investment. Engage internally to make a business case to secure budget.
4. Get internal resources – You can't do this yourself. Collaborate and consult with key stakeholders in all departments to share the load. It's often far more effective to have others champion the cause for you.
5. Align with corporate goals – Does your .brand TLD strategy reflect your organisation's mission, vision and values? Now's the time to engage every department to get collective buy-in.
You're building something from scratch and you need to get your plans in place. Internal engagement is the key to successful project planning and management.
Think about the construction of a house. You would never build a new house without detailed plans.
Similarly, with the creation of your TLD strategy, you should facilitate constructive internal engagement so you can build a plan that provides visibility across all facets of your business operations — and provide a digital platform for your organisation for many, many years to come.
Written by Tony Kirsch, Senior Manager - International Business Development at ARI Registry Services
At the recent Anti-Phishing Working Group meeting in San Francisco, Rod Rasmussen and I published our latest APWG Global Phishing Survey. Phishing is a distinct kind of e-crime, one that's possible to measure and analyze in depth. Our report is a look at how criminals act and react, and what the implications are for the domain name industry.
This report seeks to understand trends and their significance by quantifying the scope of the global phishing problem, specifically all the phishing attacks detected in the first half of 2013. Here's some of what we found.
Phishers seek out vulnerable resources. There were at least 72,758 unique phishing attacks worldwide, occurring on 53,685 unique domain names. Most of those domains were hacked — on vulnerable web servers that the phishers broke into. In fact, 27% of all phishing attacks recorded worldwide involved mass break-ins at vulnerable hosting centers. Breaking into such hosting is a high-yield activity, and fits into a larger trend where criminals turn compromised servers at hosting facilities into weapons. We see such servers being utilized for all manner of abuse beyond phishing, ranging from underground proxy networks to large-scale DDoS attacks.
Like other criminals, phishers seek new markets. Phishing is exploding in China, where the expanding middle class is using e-commerce more often. We identified 12,175 domain names that we believe were registered maliciously, by phishers. Of those, at least 8,240 (68%) were registered to phish Chinese targets: services and sites in China that serve a primarily Chinese customer base.
Anti-abuse responses work. Some TLDs have anti-abuse monitoring and response programs, and the data shows that phish in these TLDs are taken down much more quickly, thereby saving a lot of victims. Phishing that uses URL shorteners is also way down, because companies such as BIT.LY and T.CO have implemented good monitoring programs, which has driven phishers away. And a lack of abuse monitoring becomes really noticeable in some TLDs, which suffer high levels of abuse.
Sometimes the data explodes common perceptions. Phishers don't cyber-squat much. Just 2.3% of all domains that were used for phishing contain a relevant brand name or reasonable variation thereof (often a misspelling). Instead, phishers usually register nonsense strings. Placing brand names or variations thereof in the domain name itself is not a favored tactic since brand owners are proactively scanning Internet zone files for their brand names. Instead, phishers often place brand names in subdomains or subdirectories.
And while people used to worry that IDNs would be used widely to fool people into visiting look-alike domains, the actual occurrence is vanishingly small. In seven years of research, we have found only eight IDN domains used for homographic attacks, out of the hundreds of millions of domains registered during that time. It's a reminder that a security vulnerability can be measured by its actual impact.
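For readers unfamiliar with the term, a homographic attack registers an IDN whose characters merely look like those of a target name, for instance a Cyrillic letter standing in for a Latin one. A small illustrative sketch in Python, using a purely hypothetical look-alike of example.com (in the DNS and in zone files the name appears only in its encoded 'xn--' form):

# Hypothetical look-alike of example.com: the first letter is Cyrillic U+0435, not Latin 'e'.
lookalike = "\u0435xample.com"
print(lookalike)                   # renders almost identically to example.com
print(lookalike.encode("idna"))    # the ACE/punycode form (an 'xn--' label) actually placed in the zone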
If you're a registry operator, a new TLD applicant, a registrar, reseller, or responder, take a few minutes to read the report. It's a good way to know the enemy, and to protect your company and your users.
Written by Greg Aaron, President, Illumintel Inc. and Co-Chair of the APWG's Internet Policy Committee
The United States ranks 24th worldwide in the percentage of residents who use the Internet, according to the International Telecommunication Union's 2013 State of Broadband Report, released recently at a meeting of the Broadband Commission for Digital Development. Eighty-one percent of U.S. residents use the Internet, the ITU said.
The country with the highest percentage of people using the Internet was Iceland, where 97 percent of the people are Internet users. The top 10 countries all had usage rates above 88 percent.
Percentage of Individuals Using the Internet, Worldwide, 2012 – Source: The State of Broadband 2013 - Universalizing Broadband by the Broadband Commission, September 2013 (e - ITU Estimates)
Life outside IPv6 by gogo6 (infographic)
Within every organization a chosen few are tasked with introducing IPv6 into their networks, outward-facing services or applications. But who are they? We know them as Network Engineers, System Administrators and Software Developers, but are they one-trick ponies spending all their time in layer 3?
As a proxy for the market we turned to the gogoNET community of 95,000 networking professionals and polled them in early September on their work life outside of IPv6. Based on 703 responses, we now have a better idea of who they are and what else they do.
At a high level, the professionals in charge of IPv6 are senior employees also involved in other areas of advanced networking. At the top of the list is network security: 67% of professionals responsible for IPv6 are also working on network security. The rest of the list, in descending order, includes network management, DNS, virtualization, core infrastructure, cloud computing, edge infrastructure, load balancing, application delivery, unified communications, IPAM and finally SDN, where 15% of IPv6 implementers are also working on software-defined networks. Click on the infographic for more details.
To see the full results of this poll and others, go to the gogoNET polling area and while you're there, take our poll on the controversial subject of Carrier Grade NAT.
Written by Bruce Sinclair, CEO, gogo6
As we draw closer to the first new gTLD registry launch, many companies are beginning the arduous task of developing their new gTLD registration and blocking strategies. And after speaking with dozens of clients, I can tell you that the planned approaches range from very minimal registration and blocking strategies for one or two core brands, all the way through to registrations of multiple brands in every single new gTLD registry.
Generally speaking, we are recommending that companies look to register only exact-matches of their core trademarks in registries where there is a close correlation between the brand and the TLD. For example, financial institutions should consider registering in TLDs such as .bank, .loan(s), and .mortgage. Identifying these kinds of close matches is easy, especially given that the number of open and restricted TLDs is just under 620.
More difficult to answer for most companies is where to register in non-Latin TLDs. When it comes to non-Latin registrations, we recommend that companies make best efforts to understand how brands are marketed internationally. If they are marketed using non-Latin characters, then consider registering in the new IDN (Internationalized Domain Names) TLDs assuming that there is a nexus between the brand and the TLD. However, we strongly discourage mixing character scripts and do not encourage registering Latin second-levels with non-Latin top-levels.
Companies are also having to make difficult decisions about whether it makes sense to register in any of the city or geo TLDs. In this situation, we ask companies to think about whether they are actively marketing or promoting their brands in these cities or regions.
In addition, there are certain categories of registries which pose their own special risks, including gripe (.wtf and .sucks), vice (.sex and .poker), corporate identifier (.inc and .gmbh) and charitable (.foundation and .charity) TLDs — and companies must determine their tolerance for risk when planning their registration and blocking strategies around these.
And finally, there are all of the truly generic new gTLD registries like .web, .blog and .news — and again there are difficult decisions to be made, as there is no one-size-fits-all when it comes to developing a registration and blocking strategy.
At this point we are recommending that companies do their best to understand this new environment and that any strategy developed should provide general guidelines only. Actual registration and blocking decisions should take into account many factors which are not yet known, such as timing, price, special eligibility requirements or RPMs, distribution channels and marketing support.
And as I've mentioned multiple times before, trying to register every variation, typosquat or misspelling in this new environment as a method for protecting brands will be cost prohibitive. Policing for abuse and taking action where it makes sense will be key to identifying and addressing abuse.
Written by Elisa Cooper, Director of Product Marketing at MarkMonitor
It's late in the new gTLD day and the program looks to be inching ever closer to the finish line. Yet last-minute hiccups seem to be a recurring theme for this ambitious project to expand the Internet namespace far beyond the 300-odd active TLDs in existence today (counting generics and country codes). That drive for growth is already underway, with 63 gTLD contracts signed as of mid-September. The list includes incumbents like .COM, of course, but also a spate of the first of the new strings that are set to be commonplace for tomorrow's Internet users.
But will those users find themselves at greater risk because of this namespace expansion? That's what several parties have been asking in recent months. Not that increasing the number of gTLDs is inherently dangerous. No, the risk appears to lie in the way it's currently being done, if you listen to ICANN's Security and Stability Advisory Committee (SSAC), which has put out several reports, such as SAC045 and SAC046, recommending that action be taken to mitigate the risk. One of the most recent prods from SSAC was SAC059, published on April 18, 2013, which underscored the need for additional interdisciplinary study.
Others have chimed in. Another ICANN body, the At-Large Advisory Committee, has also called for a more determined risk mitigation strategy. Most recently, in response to ICANN's proposed name collision risk mitigation strategy, several companies and organizations have voiced their concerns as well, including Verizon, Microsoft and Yahoo, the United States Telecom Association and the Online Trust Alliance.
But most noticeable of all, if only because it is the incumbent of all incumbents in the domain ecosystem, have been Verisign's appeals for ICANN to act. In March of this year, the .COM registry wrote to ICANN CEO Fadi Chehadé with a study on "new gTLD security and stability considerations" calling for risk mitigation actions. Verisign has since followed this up with various comments, such as this one on the ICANN proposal to mitigate the risk of name collisions created by the delegation of new gTLDs. Verisign also submitted analysis contradicting statements by .CBA applicant Commonwealth Bank of Australia that its TLD is safe. As if that wasn't enough, the company followed up with analysis of other applied-for TLDs in its drive to "illustrate the need to undertake qualitative impact assessments for applied-for strings."
In short, Verisign is saying that to launch new gTLDs now would be tantamount to jumping off a cliff and hoping that, by some stroke of luck, one might sprout wings and fly away.
Others are saying this is, at best, needlessly alarmist and, at worst, a protectionist play from the company that has the most to lose from new strings coming to market and taking aim at .COM's existing dominance.
Rather than condemning Verisign outright, I wanted to try to understand what their problem with the current state of the new gTLD program really is. So I spoke with their Chief Security Officer, Danny McPherson, making it clear that I would use his answers in this article. My questions were aimed at understanding what's behind the alarm bells, and whether they should be heeded or simply silenced as we stand aside and let innovation stride forth. Here are some excerpts from the 30-minute telephone conversation I had with Danny this week.
* * *
SVG: Isn't all this really about Verisign protecting its own interests?
DMP: Well, we've certainly heard that a lot. Verisign has many roles in relation to the new gTLD program. Those roles include providing back-end registry services, under contract, to applicants representing approximately 200 applied-for new gTLDs. As you can imagine, this is not an easy line for Verisign to walk with them.
But the substance of what we've talked about on the technical side hasn't been dismissed by anyone since our March report that highlighted the need to address the SSAC recommendations. For the most part, the recommendations we've made are simply re-iterations of things that SSAC and other ICANN commissioned experts have recommended. Yet we seem to be the only ones that want to hold anyone accountable for delivering on those. However, others are now realizing the problem with ICANN's approach and have filed comments with ICANN.
Verisign is not only the operator of the A and J root servers. We also have a unique role as zone publisher: in that role, we're actually the ones that provision these new gTLDs in the root zone file and publish it to all the root operators. As part of our cooperative agreement for that, we have security and stability obligations. Those extend not only to the root system itself, but also to looking at the consequences of doing something harmful. So we have an obligation to look at this, and to look at what other experts have said and what our own experts think.
We have done that and we see a lot of outstanding issues. We have to be concerned about security and stability obligations. It isn't just about protecting ICANN or SSAC or the root server system itself, it's also about what the consequences to users are. Are there going to be new exploits or vulnerabilities because some string is delegated? Is it potentially going to cause disruption to some piece of infrastructure? Or is it going to make some element of the network less stable or predictable?
It seems as though everyone's lost sight of that and isn't worried about the consumers or the long-term effects of this. This is part of the reason we did the CBA analysis. It's about highlighting that there's a whole array of attributes, something we call the "risk matrix," that need to be considered because we know each one of these represents some level of risk and may result in an actual threat if we have a motivated, capable adversary.
SVG: Verisign aren't the only capable registry and infrastructure managers around. Neustar also have a proven track record, yet they seem to be in strong disagreement with your assessments. Have you looked at their analysis?
DMP: Definitely. The reality is that the data used by anyone who has done analysis to date has mostly been the DITL (Day in the Life of the Internet) data, which is a two-day snapshot taken earlier this year. If you're going to use the occurrence or incidence of something as a measure, then do it over a reasonable time frame and data set.
I believe we should only be using two classifications for strings right now: known high risk, like .MAIL, .CORP and .HOME… and uncertain. I think anything we base on the DITL data and a two-day snapshot is going to be inaccurate. You may have some level of precision within that data, but it's by no means going to be accurate. Don't draw an arbitrary line at 20% based on what looks secure and not secure in a two-day snapshot of data; instead, use measurement apparatus across the system that allows you to do it intelligently and in a sustainable way. There need to be objective criteria. Query count alone over a two-day period is not objective. It's taken across a subset of the root system and doesn't consider other elements in the DNS ecosystem. So I don't think anyone working from the DITL data set is qualified to make an objective decision about what constitutes risk and what doesn't.
I do see a lot of people saying strings are not risky when, quite frankly, they don't have the information to make that judgement in a qualitative way. We have the Interisle report and the DITL data, but I do not believe these constitute a data set that affords enough visibility for people to draw those lines. That's the reason we published our CBA analysis and conducted it over a seven-week period. We took data across a reasonable data set (only 15% or so of the root server system, but a more objective data set with a larger base). What we saw is illustrative of the types of systems, and the types of consumers, that could be impacted by these delegations. These aren't things that should be subjective. You can measure this, but you have to take a step back and make it a point to get the right data and to define that objective matrix. Right now, this has not been done.
SVG: I see you have also studied other TLDs apart from .CBA?
DMP: Yes, we submitted analysis of .CLUB, .COFFEE and .WEBSITE. One was already classified as "uncalculated risk" by ICANN, but the others are in the 80% low-risk category. Yet we showed a number of namespaces and regional affinities that query each of these strings. That's an example of precisely why we don't believe you can currently draw a line between uncalculated risk and low risk: there's no objective matrix, and you can't do it based on query volume alone.
SVG: But more analysis work means more delay in the new gTLD program, doesn't it?
DMP: If something's worth doing, it's worth doing right. You can either take this approach of death by a thousand cuts, or you can step back and do this correctly. We realise ours is not a popular position. But we also know that it's a responsible one. Everything we've said stands on technical merit. No one's claimed it is technically inaccurate.
There are definitely some recommendations in SAC045 and SAC046 that need to be implemented. ICANN should have had a plan in place so that, on Reveal Day in 2012, it could have begun forewarning potentially impacted parties of the impending delegation of new strings, giving those parties time to mitigate whatever impact the delegation of a particular string might have on their operating environment.
Other recommendations pertain to protecting the root server system itself: making sure all the root operators are performing to par, and ensuring that if any negative consequences are experienced as a result of the delegation of a new gTLD, the root zone partners have a way to quickly back it out, or at least assess the problem. If there are impacts, we ought to have some visibility into them, some early-warning capability. These are all prudent steps any engineer would want to take. These things still haven't been done. We still haven't taken an intellectually honest, sound engineering approach to solving these issues.
While ours is not a popular position to be in, we believe it's the right position. There seems to be this romantic notion that these things happen magically and there's no need to worry about the impact to consumers. The reality is there's a lot of risk today. We have 3 billion Internet users and hundreds of billions in commerce in the U.S. alone that are based on the Internet. So the consequences of not doing this could be much worse for ICANN and the community, and the Internet, than stepping back and doing it properly.
The sooner ICANN takes responsibility and recognises that these outstanding issues, outlined in good faith and with good reason by its own advisory committees, remain unresolved and need to be resolved, the better. The sooner these steps are taken, the sooner new gTLDs can be delegated responsibly. If ICANN and the community had heeded our call back in March, six months ago, we'd probably be done with this and much closer to seeing new gTLDs in the marketplace.
Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING
The difficulty of applying a hierarchically organized PKI to the decentralized world of Internet routing is being fully exposed in a new Internet-draft. The document represents a rational response to an RPKI that closely ties address resources to a handful of Internet governance institutions, nicely illustrates how governments and national security policy are influencing Internet security, and portends substantial costs for network operators and beyond if adopted widely.
To start, a quick reminder of the three informational components that together comprise the RPKI. First, there are the statements created by network operators that authorize route origination (route origin authorizations, or ROAs), which are contained in the RPKI. Second, there are the certificates issued by the RIRs and other parties in the RPKI hierarchy, which relying parties (i.e., other network operators) use to validate the authenticity and integrity of those statements. Finally, there are the public keys used by relying parties to validate the chain of certificates in the PKI, starting with a trust anchor. In theory, these pieces of information can be used together by network operators to help prevent unauthorized routing.
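To make the mechanics concrete, here is a minimal Python sketch of how a relying party might use already-validated ROA data to classify a BGP announcement. This is a simplification rather than the behavior of any particular relying-party implementation; the prefixes and AS numbers are illustrative placeholders, and the certificate-chain validation described above is omitted.

    from ipaddress import ip_network

    # Hypothetical, already-validated ROAs: (prefix, max length, authorized origin ASN).
    # In practice these would come out of RPKI validation, which also checks the
    # certificate chain up to a trust anchor (omitted here).
    ROAS = [
        ("192.0.2.0/24", 24, 64500),
        ("198.51.100.0/22", 24, 64501),
    ]

    def origin_validation(prefix, origin_asn):
        """Classify an announcement as 'valid', 'invalid' or 'not-found'."""
        announced = ip_network(prefix)
        covered = False
        for roa_prefix, max_len, roa_asn in ROAS:
            if announced.subnet_of(ip_network(roa_prefix)):
                covered = True
                if announced.prefixlen <= max_len and origin_asn == roa_asn:
                    return "valid"
        return "invalid" if covered else "not-found"

    print(origin_validation("192.0.2.0/24", 64500))    # valid
    print(origin_validation("192.0.2.0/24", 64999))    # invalid: wrong origin AS
    print(origin_validation("203.0.113.0/24", 64500))  # not-found: no covering ROA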
The Internet-draft, entitled Suspenders: A Fail-safe Mechanism for the RPKI, was authored by participants affiliated with longtime U.S. defense contractor BBN/Raytheon. It describes a system for protecting against "inappropriate" changes to the data in the RPKI. The motivation for it was presented in a set of slides at IETF 87 this summer:
A nation might worry that some entity in the resource allocation hierarchy could (accidentally or maliciously) revoke a certificate for critical infrastructure resources (in that nation, or elsewhere)
A nation can protect nets within its administrative jurisdiction against such mishaps IF it can direct internal nets to rely on a national authority for RPKI for these critical infrastructure resources
If the country could externally declare the ROA [route origin authorization] data for its ISPs, that would be even better (subject to appropriate controls).
To accomplish this, Suspenders proposes a LOCK record, stored in the RPKI, which points to an externally hosted Internet Number Resource Declaration (INRD) file. The INRD file, validated independently of the RPKI, can be used by a relying party to corroborate the route origin authorization information stored in the RPKI. If the data does not match, the relying party decides whether or not to trust the information in the RPKI and routes accordingly.
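As a rough sketch of that corroboration step, a relying party might compare the ROA-style entries it derived from the RPKI with those declared in the INRD file and apply local policy when they disagree. The data shapes below are assumptions made purely for illustration; the draft's actual record formats and precedence rules are more involved.

    # Hypothetical ROA-style entries: (prefix, max length, origin ASN).
    rpki_roas = {("192.0.2.0/24", 24, 64500)}
    inrd_roas = {("192.0.2.0/24", 24, 64500), ("198.51.100.0/24", 24, 64500)}

    def corroborate(rpki, inrd, prefer_inrd=True):
        """Return the set of entries the relying party decides to trust.

        If the two sources agree, the RPKI data is used as-is. If they differ,
        local policy decides; here we assume the operator prefers the external
        INRD declaration, but falling back to the RPKI is equally possible.
        """
        if rpki == inrd:
            return rpki
        return inrd if prefer_inrd else rpki

    trusted = corroborate(rpki_roas, inrd_roas)
    print(trusted)  # the sources mismatch here, so the INRD declaration wins under this policy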
Practically, Suspenders decouples the publication and validation of route origin authorization information from the RIRs. In the colorful words of the authors, it eliminates, under specific conditions, the threat of a certificate authority accidentally or deliberately "whacking" an ISP's route origin authorizations by rescinding a certificate. And the threat of "whacking" is real from technical, policy and legal perspectives. According to work done at Boston University to be presented at HotNets, the revocation of certificates could have significant extraterritorial implications for routing. While the RIRs have deployed RPKI as an opt-in service with terms and conditions to which subscribers must adhere, bottom-up policies governing certificates used in the RPKI have not been developed. And it's legally uncertain how RIRs would respond to law enforcement requests to revoke certificates, though we already know one RIR has no legal standing to object to certain requests about registry data.
So, now we know at least one U.S. government agency is concerned with the flip side of the security that RPKI enables, i.e., control of routing. The motivation for Suspenders is generally consistent with larger national security-oriented policy objectives, e.g., the 2013 Executive Order and Commerce Department recommendations, which are concerned with protecting U.S. "critical infrastructure," much of which happens to run over networks using the Internet protocol. Apparently, the motivation is shared by more than one government. According to one author, engineers at the China Internet Network Information Center (CNNIC) expressed concerns about foreign influence over the RPKI and helped to refine the work.
Unsurprisingly, nation-states aren't very interested in global Internet governance when it potentially impacts their critical infrastructure. Therefore they adapt. In some sense, Suspenders is the emergence of "separate system policy" for governments, similar to the experience with DNSSEC. Nonetheless, network operators shouldn't expect pressure to use the RIR's RPKI to abate. A single rooted RPKI will continue to be cautiously advocated by the institutions that stand to benefit from it, namely the RIRs and ICANN.
Possible fallout from Suspenders
It is still just a draft, but if Suspenders is standardized and deployed it could come with substantial cost depending on where you sit. For one, it could provide an avenue for governments to exert direct control over what network operators route. In the authors' words:
For example, Elbonia might mandate that every INR holder within the country make use of Suspenders. Every Elbonian INR holder will be required to include a LOCK record in its publication point [within RPKI], no matter where that publication point is realized. The URL in each LOCK points to a file on a server managed by an Elbonian government organization.
Another concern is the cost related to validating the numerous INRD files required by governments or used by network operators with similar concerns about the RPKI. Similar to DNSSEC's islands-of-trust problem, disparate INRD files introduce more complexity for operators. Federations of operators might emerge, but these would similarly require coordination within and between them to maintain seamless validation and secure routing.
To date, reaction from the Working Group reviewing the draft has been muted. It will be interesting to see where the various interests come down and how influential the supporters of the draft turn out to be. It may be possible to accommodate governments' concerns using Suspenders, but it is also perfectly legitimate to question how closely routing security should be tied to a territorial view of the world.
Written by Brenden Kuerbis, Fellow in Internet Security Governance, Citizen Lab, Univ of Toronto
Day one of the Digital Marketing and gTLD Strategy congress is happening in London today. As we inch ever closer to new gTLDs actually launching on the Internet, business models and marketing approaches are becoming clearer and better defined. This was evident in today's presentations and workshops, with applicants and current TLD operators alike showing much greater depth of thought into how these namespaces might actually be of use to Internet users.
That's often been missing from new gTLD discussions in the past, and for good reason. The focus until now has been much more on navigating the program's rules and getting past its "stumbling forward" effect. But now that the program is nearing completion, it's time to think about how to market TLDs that could actually be live within a few months.
If the London congress is anything to go by, domain industry players are now looking hard at making sure people understand the benefits of their domains. The phrase "changing the value proposition" was used in today's presentations to highlight the drive to make sure users don't just buy the cheapest TLD on a registrar's drop-down menu, but instead choose the product that best fits their needs.
TLD operators are clearly behaving more and more like any other business. They are taking the time to identify the communities that might best promote their TLDs, courting early adopters, for example, and working with them to hone the TLD to the point where the mass market can be offered a product made to fit its expectations.
Strategies for TLD growth are also about generating awareness, with TLD operators looking to be involved with key events that might speak to their target audiences, or working with partners such as registrars to produce advertising to help support a TLD's awareness campaign.
Judging by what we heard in London today, the domain industry is maturing fast and moving away from the "domain complex" that's been in evidence in the past, where the industry would be almost apologetic about the perceived complexity of the domain name ecosystem. Nowadays, domain players, registries and registrars alike, are more confident pitching the TLDs they offer as, basically, "just another product." For example, we heard from the .CO operator, which runs a membership program with perks and benefits for registrants, as well as a referral program.
This more mature approach is bound to benefit the domain ecosystem as a whole, from registrants to the registrars and registries that serve them, as the industry evolves to focus on the users themselves, with domains as products taking a back seat.
Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING