In her UN General Assembly speech denouncing NSA surveillance, Brazil’s President Dilma Rousseff said:
Information and communications technologies cannot be the new battlefield between States. Time is ripe to create the conditions to prevent cyberspace from being used as a weapon of war, through espionage, sabotage, and attacks against systems and infrastructure of other countries. … For this reason, Brazil will present proposals for the establishment of a civilian multilateral framework for the governance and use of the Internet and to ensure the protection of data that travels through the web.
We share her outrage at mass surveillance. We share her opposition to the militarization of the Internet. We share her concern for privacy.
But when President Rousseff proposes to solve these problems by means of a “multilateral framework for the governance and use of the Internet,” she reveals a fundamental flaw in her thinking. It is a flaw shared by many in civil society.
You cannot control militaries, espionage and arms races by “governing the Internet.” Cyberspace is one of many aspects of military competition. Unless one eliminates or dramatically diminishes political and military competition among sovereign states, states will continue to spy, break into things, and engage in conflict when it suits their interests. Cyber conflict is no exception.
Rousseff is mixing apples and oranges. If you want to control militaries and espionage, then regulate arms, militaries and espionage – not “the Internet.”
This confusion is potentially dangerous. If the NSA outrages feed into a call for global Internet governance, and this governance focuses on critical Internet resources and the production and use of Internet-enabled services by civil society and the private sector, as it inevitably will, we are certain to get lots of governance of the Internet, and very little governance of espionage, militaries, and cyber arms.
In other words, Dilma’s “civilian multilateral framework for the governance and use of the Internet” is only going to regulate us – the civilian users and private sector producers of Internet products and services. It will not control the NSA, the Chinese People's Liberation Army, the Russian FSB or the British GCHQ.
Realism in international relations theory is based on the view that the international system is anarchic. This does not mean that it is chaotic, but simply that the system is composed of independent states and there is no central authority capable of coercing all of them into following rules. The other key tenet of realism is that the primary goal of states in the international system is their own survival.
It follows that the only way one state can compel another state to do anything is through some form of coercion, such as war, a credible threat of war, or economic sanctions. And the only time states agree to cooperate to set and enforce rules is when it is in their self-interest to do so. Thus, when sovereign states come together to agree to regulate things internationally, their priorities will always be to:
- Preserve or enlarge their own power relative to other states; and
- Ensure that the regulations are designed to bring under control those aspects of civil society and business that might undermine or threaten their power.
Any other benefits, such as privacy for users or freedom of expression, will be secondary concerns. That’s just the way it is in international relations. Asking states to prevent cyberspace from being used as a weapon of war is like asking foxes to guard henhouses.
That’s one reason why it is so essential that these conferences be fully open to non-state actors, and that they not be organized around national representation.
Let’s think twice about linking the NSA reaction too strongly to Internet governance. There is some linkage, of course. The NSA revelations should remind us to be realists in our approach to Internet governance. This means recognizing that all states will approach Internet regulation with their own survival and power uppermost on their agendas; it also means that no single state can be trusted as a neutral steward of the global Internet, but will inevitably use its position to benefit itself. These implications of the Snowden revelations need to be recognized. But let us not confuse NSA regulation with Internet regulation.
In the midst of overseeing the biggest change in the history of the Internet's global addressing system, ICANN President Fadi Chehade has inexplicably embarked on a high-stakes battle over the very future of his organization and its relationship to world governments — at the expense of the private sector's historical role in Internet governance.
Worse, Fadi's global government gambit could have serious repercussions for the future of the Internet.
Fadi is not the first ICANN president to seek to break ICANN's legacy links to the USA. But where previous ICANN leaders confined themselves to rhetoric, Fadi is now neck-deep in a geo-political current where non-US governments are pushing for an end to the US role in assigning the IANA contract for allocating addresses and managing the DNS root.
What's not clear is where Fadi found the authority for this move, or whether he has fully explored the potential consequences of the changes he now embraces.
Earlier this month, Fadi joined with standards groups and nongovernmental organizations to release the Montevideo Declaration, calling for "accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing."
Then earlier this week at the Internet Governance Forum in Indonesia, Fadi joined with Brazilian Minister of Communications Paulo Bernardo Silva to announce an upcoming "Summit" in Brazil to develop a more "democratic and inclusive" model of Internet governance.
All of this might sound pretty reasonable — to anyone who's been living under a rock for the past decade.
Brazil is no fan of an Internet governance model that lets non-governments have a say. Brazil is the first initial in BRIC – Brazil, Russia, India and China – the bloc of nations that has campaigned for years to move authority over Internet functions from ICANN to the United Nations and the ITU.
So when Brazil stands shoulder-to-shoulder with ICANN and calls for "accelerating the globalization of ICANN and IANA functions" it is pretty clear that these would-be allies have two distinct, non-compatible ends in mind.
Governments leading the call for increased "globalization" have no use for an independent ICANN. For more than a decade, leaders of those states and their allies have made it clear that they don't trust a multistakeholder upstart to make the sorts of decisions that have been traditionally made by governments.
Standing against those efforts have been many of the same people who have made ICANN into the successful model that Fadi inherited a year ago. The businesses, academics, advocates, and technologists who have opposed governmental takeover of ICANN don't agree on many things, but we all agree that the private sector shouldn't be cut out of a decision-making process that affects all of us.
Yet this is precisely the threat that Brazil and its allies pose. Once the ITU or some other intergovernmental body gets its hands on the ICANN keys, private sector interests will find themselves in the back of the room, with no votes to cast.
Either Fadi believes that Brazil's quest for government control of ICANN will ease when the US hands over IANA, or he's positioning ICANN to survive under intergovernmental rule. Neither option is encouraging for those of us in the private sector.
One also has to wonder where ICANN's vaunted "community input" is in all of this. I am an officer of ICANN's Business Constituency, but I don't recall any discussion of the "Montevideo Declaration", nor can I find any record of a Board vote on the move. Perhaps Fadi's renovation of ICANN is already underway, and the first thing to go was the multistakeholder model.
If I had the chance to comment as a member of the ICANN community, I'd have argued that the IANA functions contract should continue as something ICANN "earns" through periodic reviews. Why? Because while IANA functions don't add much administrative burden, they are absolutely vital in two ways.
First, the IANA contractor has to maintain security, stability, and resiliency (SSR) of the DNS root, even while expanding that root for lots of new gTLDs. ICANN has to balance its SSR responsibility against its need to quickly launch new TLDs that will fund its ballooning budget.
Second, the need to re-earn IANA every few years is all that keeps ICANN from walking away from the Affirmation of Commitments — the only document holding ICANN accountable to the community it serves, including users, governments, the private sector, and civil society. Reviews for the IANA Contract are a powerful reminder that ICANN serves at the pleasure of global stakeholders and has no permanent lock on managing the Internet's name and address system.
So the question remains, why is Fadi making these moves at this time? ICANN sits at a critical inflection point as it adds hundreds of new top-level domains to the Internet. The eyes of the world are on ICANN as never before, and the stakes for "controlling" the DNS grow with every new TLD that is delegated.
Also, if ICANN's first responsibility remains, as it should, ensuring the security and stability of the DNS, how do we justify this dangerously destabilizing foray into the shark-tank of politics known as the United Nations?
ICANN is not a typical company, and its CEO does not have a typical role. Fadi is the steward of a global trust, not the leader of the global Internet. "Re-thinking" the fundamental structure of ICANN is not — and should not be — in his job description. If we're going to make changes that affect the stability of the DNS upon which we all rely, we should do it as a community.
Written by Steve DelBianco, Executive Director at NetChoice
It is not over. Despite the overwhelming majorities in favour expressed in a vote held on October 21 in the Committee on Civil Liberties, Justice and Home Affairs (LIBE), the draft EU data protection regulation and the directive on data protection for law enforcement still have a bumpy road ahead. Both rapporteurs, Jan Philipp Albrecht (Green Party) and Dimitrios Droutsas (Socialists & Democrats), welcomed the LIBE vote in a press conference on October 22, as it gave them a mandate to start negotiating with the Council of Ministers. Yet data protection experts, rights activists and some members of Parliament had hoped for more.
The LIBE Committee had set aside a full four hours to conclude the fat dossier, but surprisingly, little more than half an hour sufficed to do the job: 51 committee members voted for the compromises on the regulation carved out by Albrecht with the shadow rapporteurs, while one opposed and three abstained. The compromises on the directive, which covers data protection in law enforcement and judicial matters, were more controversial (29 for; 22 against; 3 abstentions).

The good...
A strengthening of users' individual rights, better transparency, a right for users to be informed about data collected about them, and even an obligation for providers to erase data at users' request were core points underlined by Albrecht.
On the positive side, he said, all actors on the EU market, including those headquartered in non-EU countries, would have to adhere to EU data protection law. If third-country companies did not follow it, sanctions of up to five percent of turnover (revenue) could be levied.
At the same time, exemptions from some obligations were made for companies with fewer than 5,000 customer contacts per year; they do not need a data protection officer. “We need to be strict with the giants,” Droutsas said, “as they could do nasty things with personal data and they do.”

...the bad...
The rapporteurs also acknowledged they would have liked to get some additional features, starting with a uniform data protection regime for the private and public sector including law enforcement. The directive will allow member states to implement their own version of the legislation within a set of minimum standards. Data transfers within the EU therefore could again mean that citizens from a country with higher standards lose some protection when their data is transferred across their country's border.
Activists were close to furious over what they say are huge loopholes in the regulation. La Quadrature du Net is concerned that provisions such as the following might make protection ineffective:
“processing is necessary for the purpose of the legitimate interests pursued by a controller or in case of disclosure, by the third party to whom the data is disclosed, and which meet the reasonable expectations of the data subject based on his or her relationship with the controller, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data.“ (Article 6 recital 38 according to an unofficial final draft).
Article 6 also includes flexibility for “the processing of personal data for the purpose of direct marketing,“ which should “be presumed as carried out for the legitimate interest of the controller.” The example illustrates that the text is no easy read for citizens.
La Quadrature, European Digital Rights (EDRI) and others also warn of the potential harm of allowing companies less restricted processing of pseudonymous and anonymous data.
Further discussion concerns the lighter touch applied to pseudonymous data (recital 38, article 6). Profiling “based solely on the processing of pseudonymous data should be presumed not to significantly affect the interests, rights or freedoms of the data subject“ (recital 58a, article 20).

...and the ugly
Although limitations accompany these provisions, groups such as European Digital Rights warn that the vote if upheld “would launch an 'open season' for online companies to quietly collect our data, create profiles and sell our personalities to the highest bidder.”
“Despite almost daily stories of data being lost, mislaid, breached and trafficked to and by foreign governments, our elected representatives adopted a text saying that corporate tracking and profiling of individuals should not be understood as significantly affecting our rights and our freedoms“, EDRI wrote in their take on the vote.
“Most companies cite ‘legitimate interests’ as a valid reason to hoover up more data than required, such as, for example, Google’s pooling of all information on users of its many disparate services,” writes Monique Goyens, Director General of the EU consumer organisation BEUC. Users and rights organisations would have to be vigilant that “legitimate interests” did not “become the legal loophole of the new regulation.”
Transfers of data to third countries could be eased by an EU privacy seal or even binding corporate rules (article 42). Such a regulation would not be FISA-proof, experts warn. On the other side of the spectrum, concerns were also voiced by large companies in the digital market immediately after the vote. The European Digital Media Association (EDiMA), an industry association that includes Microsoft, eBay, Amazon, Google and Apple, warned against a rush to push through the regulation, as it still needed “considerable discussion.”

Passage before EU elections preferred by Parliament, Commission
Both rapporteurs received overwhelming support to go directly into negotiations with the member states and the European Commission. They both pointed to the upcoming meeting of the European Council in Brussels at the end of the week as a first touch point where member states could commit themselves to a quick start of the trilogue. Member states could now align themselves with the declared goal of strengthening data protection in “post-Snowden” times.
If the Parliament, the Commission (whose Vice-President Viviane Reding warmly welcomed the Committee's decision) and the member states can agree, the package could pass in a quicker first-reading procedure. Otherwise, it might be pushed back until after the EU elections next year. Time was of the essence, Droutsas and Albrecht argued, defending the informal trilogue, which had been heavily criticised by Jérémie Zimmermann of La Quadrature du Net, who warned that the “text will now be modified behind closed doors,“ running the risk that member states might annihilate “all positive provisions“ reached so far.
The architecture of a networked system is its underlying technical structure, designed according to a “matrix of concepts” (Agre, 2003). It constitutes the logical and structural layout of a system, including transmission equipment, communication protocols, infrastructure, and connectivity between its components or nodes. This article introduces the idea of network architecture as internet governance [1], and more specifically, it outlines the dialectic between centralised and distributed architectures, institutions and practices, and how they mutually affect each other.
Technical architectures, as argued by several authors discussed in this article, may be understood as alternative ways of influencing economic systems, sets of rules, communities of practice – indeed, as the very fabric of user behaviour and interaction. The status of every internet user as consumer, sharer, producer and possibly manager of digital content is informed by, and shapes in return, the technical structure and organisation of the services she has access to. It is in this sense that network architecture is internet governance: by changing the design of the networks subtending internet-based services, and the global internet itself, the politics of the network of networks are affected – the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, the fair competition between actors of the internet market.

Architecture, “politics by other means”
“Study an information system and neglect its standards, wires, and settings, and you miss equally essential aspects of aesthetics, justice, and change,” once wrote science and technology studies (STS) scholar Susan Leigh Star (Star, 1999, p. 339). Indeed, the history of internet innovation suggests that the shaping of technical architectures populating the network of networks is, in the words of philosopher Bruno Latour, “politics by other means” (Latour, 1988, p. 229). The ways in which architecture is politics, protocols are law, code shapes rights (e.g., Lessig, 1999; DeNardis, 2009), are explored today by a number of different authors in relation to networked and online media; in particular, internet-related research has contributed to foster the debate on the intersection and overlap of governance by architecture with other forms of governance. This section, while not pretending to be exhaustive, discusses some key approaches to the question.
Interested in the relationship between architectures and the organisation of society, Terje Rasmussen (2003) has argued that there is a structural match between the development of the technical model of the internet (such as packet switching and distributed routing) and the transformation of the societies in which it operates. In this account, the technical infrastructure of the Internet suggests that ours is a distributed society, based on the ability to handle risk, rather than on central control. On the other hand, information studies scholar and internet pioneer Philip Agre suggests that “Decentralized institutions do not imply decentralized architectures, or vice versa. [...] Architectures and institutions inevitably coevolve, and to the extent they can be designed, they should be designed together” (Agre, 2003, p. 42), but they are not “naturally” related.
IT law scholar Barbara van Schewick seeks to examine how changes, notably design choices, in internet architecture affect the economic environment for innovation, and evaluates the impact of these changes from the perspective of public policy (2010, p. 2). According to her, this is a first step towards filling a gap in how scholarship understands innovators’ decisions and the economic environment for innovation. After many years of research on innovation processes, we understand how these are affected by changes in laws, norms, and prices; yet, we lack a similar understanding of how architecture and innovation impact each other, perhaps for the intrinsic appeal of architectures as purely technical systems (ibid., p. 2-3). Traditionally, she concludes, policy makers have used the law to bring about desired economic effects. Architecture de facto constitutes an alternative way of influencing economic systems, and as such, it is becoming another tool that actors can use to further their interests (ibid., p. 389).
The relationship between architecture and law-making for networked media has been an increasingly central interdisciplinary preoccupation since the late 1990s/early 2000s. Early uses of the metaphor “code is law” can be found in William Mitchell’s City of Bits (1995) and in Joel Reidenberg’s article on lex informatica, the formation of information policy rules through technology (1998). However, legal scholars Yochai Benkler and Lawrence Lessig have arguably been the “scene-setters” in this field, with their work on sharing as a paradigm of economic production in its own right (2004) and on technical architecture as politics (1999), respectively. While the former argued for the rise of a “networked information economy” as a system of “production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means” (Benkler, 2006), the latter introduced technical architecture as one of the four main (and interconnected) regulators of society, the other three being law, the market and social norms. The application of this principle to the text of computer programmes led to what remains, perhaps, the most striking incarnation of the famous “code is law” label (Lessig, 1999).
Among the scholars that have since been inspired by this line of inquiry, Niva Elkin-Koren is especially relevant. In her work (e.g., 2006, 2012), architecture is understood as a dynamic parameter in the reciprocal influences of law and technology design, in the field of information and communication systems. The interrelationship between law and technology often focuses on one single aspect, the challenges that emerging technologies pose to the existing legal regime, thereby creating a need for further legal reform; however, the author argues, juridical measures involving technology both as a target of regulation and as a means of enforcement should take into account that the law does not merely respond to new technologies, but also shapes them and may affect their design (Elkin-Koren, 2006).
The work of Tim Wu adds layers to the conceptualisation of code’s relationship with law, moving from Lessig’s concept that computer code can substitute for law or other forms of regulation to code as an anti-regulatory tool that certain groups will use to their advantage to minimise the costs of law – the possibility of “using code design as an alternative mechanism of interest group behavior” (Wu, 2003).

Architecture and the future(s) of the internet
The current trajectories of innovation for the internet are making it increasingly evident by the day: the evolutions (and in-volutions) of the network of networks are likely to depend in the medium-to-long term on the topology and the organisational/technical model of internet-based applications, as well as on the infrastructure underlying them (Aigrain, 2011).
This is illustrated by what has been this author’s main research focus over the past few years: the development of internet-based services – search engines, storage platforms, video streaming applications – based on decentralised network architectures (Musiani, 2013b).
The concept of decentralisation is somehow shaped and inscribed into the very beginnings of the internet – notably in the organisation and circulation of data packets – but its current topology integrates this structuring principle only in very limited ways (Minar & Hedlund, 2001). The limits of the concentrated and centralised urbanism of the internet, which has been predominant since the beginning of its commercial era and its appropriation by the masses, are sometimes highlighted by the very phenomena that have contributed to its widespread success, as best illustrated by social media (Schafer, Le Crosnier & Musiani, 2011). Examples of incidents caused by “excessive concentration” include the global consequences of the Pakistani YouTube re-routing in 2008 or the repeated failures of Twitter's infrastructure (e.g., in 2012). These incidents have put into the spotlight some of the possible limits of the concentration model: excessive control, technical and/or legal, by a single commercial entity; the opaqueness of the modalities of this control vis-à-vis the users; the vulnerability of centralised architectures to single points of failure.
While internet users have become, at least potentially, not only consumers but also distributors, sharers and producers of digital content, the network of networks is structured in such a way that large quantities of data are centralised and compressed within large data centers and server farms. At the same time, such data is most suited to a rapid re-diffusion and re-sharing in multiple locations of a network that has now reached an unprecedented level of globalisation. The current organisation of internet-based services and the structure of the network that enables their delivery – with its mandatory passage points, places of storage and trade, required intersections – raises many questions, in terms of the optimised utilisation of resources, the fluidity, rapidity and effectiveness of electronic exchanges, the security of exchanges, the stability of the network.
Beyond technology, these questions are deeply social and political, and affect the “ramifications of possibles” (Gai, 2007) the internet is currently facing in its near future. Resorting to decentralised architectures and distributed organisational forms constitutes a different way to address some issues of network management, from the perspective of effectiveness, response to vulnerabilities, digital “sustainable development” (better resource management), and maximisation of the internet's value for society.

Architectures shaping user rights: decentralisation and privacy by design
Systems based on distributed, decentralised, peer-to-peer (P2P) architectures seek their place today in an IT landscape that is mostly one of concentration and removal from users’ machines. From the viewpoint of informational data, personal data and exchanged content, this implies that sharing, regrouping and storing those data in today's most popular and widespread internet services means promoting a model in which traffic is redirected towards an ensemble of machines placed under the exclusive and direct control of the service provider. Thus, exchanges between users are made by “copying” data that one wishes to share onto one or more external terminals, or by giving these external machines permission to index this information. The ways in which data circulate, are stored and written on these machines are often uncertain; moreover, the rights that the service provider acquires over such data are often excessive with respect to those retained by the end user – in a way that is often opaque to users themselves [2].
When the operations of data treatment and handling are conducted, partially or totally, on users’ terminals directly linked together, this choice of network architecture contributes to building specific definitions of privacy protection. It modifies the ways in which the control on informational data, and the responsibility of their protection, are spread out to the users, the service providers and the developers who have created the service.
Three cases of internet services based on a decentralised network architecture – a search engine, a storage platform and a video streaming software, studied between 2009 and 2011 – have shown how a definition of privacy “by design,” more specifically by architectural design, takes shape in internet services (Musiani, 2013b). With this alternative, “techno-legal” way of defining privacy, a central role is attributed to the constraints and the opportunities of privacy protection that are inscribed into the technical model chosen by developers (Schaar, 2010).
Faroo, a P2P search engine developed first in Germany, then in the United Kingdom, displays a “six-level” distribution model meant to prevent the traceability of queries by a central entity; personal data are supposed to remain within the user’s own terminal and the P2P client installed on it, and to be encrypted on that very terminal before leaving it. This feature also allows the developers to work towards reducing the tension – which is a priori very difficult to eliminate – between the confidentiality of personal information and the personalisation of search queries, the latter being the “added value” that social dynamics add to the search engine, and which is based on the very collection of this personal information.
The case of Tribler, a P2P video streaming tool first developed at the Technical University of Delft (The Netherlands), is another occasion to follow this tension, as the logic underlying the system is that the history of downloads made by a user is shared by default with other users so as to feed the software’s “recommendation” algorithm. The solution envisaged by the developers has, once again, to do with an idea of “privacy by architectural design”, as it builds on the decentralised and distributed model to mitigate, in the eyes of users, the impression of exposure and self-revelation that the system’s social features may provoke: not only can the feature be disabled, but it only sends the download history to other users – it does not keep the information on any server controlled by the service.
Finally, Wuala [3], a (formerly) distributed storage platform developed in Switzerland, displayed similar attempts to protect user privacy via architecture. The heart of this service was the user’s terminal, where, thanks to a dedicated P2P client, the operations of encryption and fragmentation of stored data could take place. These two operations, conducted before any other (e.g., sharing, downloading or circulating data in the network), were meant, in the vision of Wuala’s developers, as evidence given to the users that the service provider, regardless of its intentions, did not even possess the technical means to break user trust in the system.
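To make the pattern concrete, here is a minimal, hypothetical sketch in Python (emphatically not Wuala's or Faroo's actual code) of the "encrypt and fragment on the user's terminal before anything leaves it" logic described above; the chunk size and the hash-based keystream cipher are simplifications chosen only to keep the example self-contained, and are not suitable for real-world cryptography.

```python
# Hypothetical sketch of "privacy by architectural design": data is encrypted
# and fragmented on the user's own terminal before any fragment is handed to
# storage peers. This is NOT Wuala's or Faroo's actual code; the hash-based
# keystream cipher below is illustrative only, not production cryptography.
import hashlib
import os

CHUNK_SIZE = 64  # bytes per fragment (real systems use far larger chunks)

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream of `length` bytes from `key`."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_then_fragment(plaintext: bytes, key: bytes) -> list[bytes]:
    """Encrypt locally, then split the ciphertext into fragments for peers."""
    cipher = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    return [cipher[i:i + CHUNK_SIZE] for i in range(0, len(cipher), CHUNK_SIZE)]

def reassemble_and_decrypt(fragments: list[bytes], key: bytes) -> bytes:
    """Inverse operation, possible only on the terminal that holds the key."""
    cipher = b"".join(fragments)
    return bytes(c ^ k for c, k in zip(cipher, keystream(key, len(cipher))))

if __name__ == "__main__":
    key = os.urandom(32)  # generated and kept on the user's terminal
    document = b"personal notes that the storage peers should never be able to read" * 3
    fragments = encrypt_then_fragment(document, key)
    print(f"{len(fragments)} opaque fragments ready for distribution to peers")
    # Only the key holder can reconstruct the original document.
    assert reassemble_and_decrypt(fragments, key) == document
```

The design point the sketch tries to capture is the one the developers themselves emphasise: fragmentation means no single peer ever holds the whole object, and client-side encryption means that even a party collecting every fragment cannot read it without the key, which never leaves the user's terminal.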
While developers, across all three case studies, consider that a more articulate protection of privacy is one of the core comparative advantages of their systems (and they “sell” it as such), users wonder, in turn, about the implications of a decentralised architecture for the protection of their data. What does making a part of one's own computing resources available to the whole P2P network imply for the “invisible” data collected there? In the cases of Faroo and Wuala – where the P2P model merges, in a peculiar way, with a proprietary software logic – this question is the occasion to make explicit the difficult articulation between the decentralising philosophy underpinning the systems and a closed source code. Pioneer users – for the most part, users-innovators or users-developers themselves – see the closed code as a lack of transparency, even a lack of respect, that prevents them from delving into this aspect with the tools they have available. It is good to have privacy by architecture, these users point out, but we need direct knowledge of this technique on a case-by-case basis so as, eventually, to allow for direct modifications of the architecture.
Decentralised models challenge “by architecture” the extent, the balance and the very definition of the rights obtained by service providers over users’ personal data, vis-à-vis the rights that users retain over such data. There is a trade-off: on the one hand, the user sees her privacy reinforced by the possibility of augmented control over her data and its handling by the P2P client. At the same time, and for the same reasons, her responsibility for the actions she undertakes within and by means of the application increases proportionately, as the provider voluntarily surrenders some of its control over the data and content present on the service. The collective dimension of this responsibility is also emphasised, inasmuch as an infraction of the collective behaviour has not only individual but collective consequences – be it the storage of inappropriate content, the introduction of unreliable information or spam into a distributed search index, or a “selfish” management of the bandwidth shared by a P2P streaming system.

Conclusions: how architecture matters
“Arrangements of technical architecture have always inherently been arrangements of power,” writes STS scholar Laura DeNardis (2012): the technical architecture of networked systems does not only affect internet governance, but is internet governance. This governance by architecture, or “governance by design” (De Filippi, Dulong de Rosnay & Musiani, 2013), has important implications at a number of levels, of which the previous section has given but one example.
Changes in architectural design affect the repartition of competences and responsibilities between service providers, content producers, users and network operators. They affect forms of engagement and intéressement (Callon, 2006) in networked systems, of users first and foremost, but also of other actors concerned by the implementation and the operation of internet services. They shape the sustainability of the underlying economic models and the technical and legal approaches to digital content and personal data. They make visible, in various configurations, the forms of interaction between the local and the global, the patterns of articulation between the individual and the collective.
Changes in network architectures contribute to the shaping of user rights, of the ways to produce and enforce law, and are reconfigured in return. A number of legal issues, that go way beyond copyright (despite having often been reduced to this aspect, notably in the case of peer-to-peer systems), are raised by architectural configurations of internet services. To preserve the internet’s “social value,” it is important to achieve reliable forms of regulation – technical, political, or both – without impeding present and future innovation.
Changes in architecture do, finally, contribute to shifting the boundary between public and private uses of the internet as a global facility: they are a crucial factor in defining intellectual property rights, the right to privacy of users/clients, or their rights of access to content. They contribute to defining what counts as a contributor in internet-based services, in terms both of the computing resources required to operate the system and of content.
In the end, technical architecture appears as one of the strongest, if not the strongest, structuring elements of internet governance: what is shaped into architecture and infrastructure can seldom be undone by institutional negotiation and dialogue alone, and institutions find it increasingly complicated to keep up with “creative” governance by architecture and by infrastructure [4]. In this sense, future evolutions of internet governance as a field would do well to take into account Michel van Eeten and Milton Mueller’s suggestion to expand it to include areas such as the economics of cybercrime and cyber security, network neutrality, content filtering and regulation, copyright enforcement, and interconnection arrangements among ISPs (van Eeten & Mueller, 2013).
In the digital world, it is possible to design in detail the architecture of the world users interact with – and as a consequence, it is possible to design the architecture of our global communication infrastructure in order to promote specific types of interactions over others (De Filippi et al., 2013). With important consequences for the ways in which the future internet will be governed, and for the extent to which its users will be not only customers, but citizens.

Footnotes
1. Internet governance (IG) today is a lively, emerging field, and its definition relentlessly contested by different groups across political and ideological lines. A “working definition” of IG has been provided in the past, after the United Nations-initiated World Summit on the Information Society (WSIS), by the Working Group on Internet Governance – a definition that has reached wide consensus because of its inclusiveness, but is perhaps too broad to be useful for drawing more precisely the boundaries of the field (Malcolm, 2008): “Internet governance is the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet” (WGIG, 2005). This broad definition implies the involvement of a plurality of actors, and the possibility for them to deploy a plurality of governance mechanisms. IG has been described as a mix of technical coordination, standards, and policies (e.g., Malcolm, 2008 and Mueller, 2010). See also (DeNardis, 2013) and (Musiani, 2013a).
3. The decentralised mechanism subtending the Wuala system, a trade between local storage space and space in a “P2P storage cloud” spread out to the users, was discontinued in September 2011.
4. An example is the Domain Name System and its co-optations. See (DeNardis, 2012) and (Musiani, 2013).

References
Agre, P. (2003). “Peer-to-Peer and the Promise of Internet Equality.” Communications of the ACM, 46 (2): 39-42.
Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), [https:]
Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.
Benkler, Y. (2004). “Sharing Nicely: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production.” The Yale Law Journal, 114 (2), 273-358.
Callon, M. (2006). “Sociologie de l’acteur-réseau.” In Akrich, M., Callon, M. & Latour, B. Sociologie de la traduction. Textes fondateurs. Paris : Presses des Mines, 267-276.
De Filippi, P., M. Dulong de Rosnay & F. Musiani (2013). “Peer production online communities, distributed architectures and governance by design.” Communication presented at the Fourth Transforming Audiences Conference, September 3, 2013, University of Westminster, London.
DeNardis, L. (2013). “The Emerging Field of Internet Governance”, in W. Dutton (ed.) Oxford Handbook of Internet Studies. Oxford: Oxford University Press.
DeNardis, L. (2012). “The Turn to Infrastructure for Internet Governance”, Concurring Opinions, 2012, [www.concurringopinions.com]
DeNardis, L. (2009). Protocol Politics. The Globalization of Internet Governance. Cambridge, MA: The MIT Press.
Elkin-Koren, N. (2006). “Making Technology Visible: Liability of Internet Service Providers for Peer-to-Peer Traffic.” New York University Journal of Legislation & Public Policy, 9 (15), 15-76.
Elkin-Koren, N. (2012). “Governing Access to User-Generated Content: The Changing Nature of Private Ordering in Digital Networks.” In Brousseau, E., Marzouki, M., Méadel, C. (eds.), Governance, Regulations and Powers on the Internet, Cambridge: Cambridge University Press.
Gai, A.-T. (2007). “Web 3.0: une autre branche pour l’arbre des possibles.” Transnets, [pisani.blog.lemonde.fr]
Latour, B. (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press.
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.
Malcolm, J. (2008). Multi-Stakeholder Governance and the Internet Governance Forum. Wembley, WA : Terminus Press.
Minar, N. & Hedlund, M. (2001). “A network of peers – Peer-to-peer models through the history of the Internet.” In A. Oram (Ed.), Peer-to-peer: Harnessing the Power of Disruptive Technologies, 9-20. Sebastopol, CA: O’Reilly.
Mitchell, W. J. (1996). City of Bits. Space, Place and the Infobahn. Cambridge, MA: The MIT Press.
Mueller, M. (2010). Networks and States: The Global Politics of Internet Governance. Cambridge, MA: The MIT Press.
Musiani, F. (2013a). “A Decentralized Domain Name System? User-Controlled Infrastructure as Alternative Internet Governance”. Presented at the 8th Media In Transition (MiT8) conference, May 3-5, 2013, Massachusetts Institute of Technology, Cambridge, MA. Available as draft at [web.mit.edu]
Musiani, F. (2013b). Nains sans géants. Architecture décentralisée et services Internet. Paris, Presses des Mines.
Rasmussen, T. (2003). “On distributed society: The history of the Internet as a guide to a sociological understanding of communication and society,” In G. Liestøl, A. Morrison & T. Rasmussen (ed.), Digital Media revisited : theoretical and conceptual innovation in digital domains, Cambridge, MA: The MIT Press.
Reidenberg, J. R. (1998). “Lex Informatica: The Formulation of Internet Policy Rules Through Technology.” Texas Law Review, 76 (3).
Schafer, V., H. Le Crosnier & F. Musiani (2011). La neutralité de l’Internet, un enjeu de communication. Paris: CNRS Editions/Les Essentiels d’Hermès.
Star, S. L. (1999). “The Ethnography of Infrastructure.” American Behavioral Scientist, 43 (3): 377-391.
van Eeten, M. & M. Mueller (2013). “Where Is the Governance in Internet Governance?” New Media & Society, 15 (5): 720-736.
van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.
Working Group on Internet Governance (2005). Report of the Working Group on Internet Governance, Château de Bossey, June 2005, [www.wgig.org]
Wu, T. (2003). “When Code Isn’t Law.” Virginia Law Review, 89.
As of today, Innovative Auctions has successfully completed private auctions for a total of 18 contested generic Top-Level Domains, resolving contention for more gTLDs than all other mechanisms combined — including straight-up withdrawals, private negotiations, and swaps.
Here is this week's update:
Whatbox, another first-time participant, will receive payments for withdrawing its applications for .fish and .discount.
Donuts won the other three contention sets auctioned this week — .lawyer, .discount, and .fish — and will be compensated for withdrawing from .website.
Finally, Top Level Domain Holdings (TLDH) will receive payments in exchange for withdrawing their applications for .website and .lawyer.
Participants in this week's auctions commented on the "smooth process" (Bhavin Turakhia, Radix) and "well-performing software" (Fred Krueger, TLDH) and all bidders expressed interest in participating in future auctions.
Applicant Auctions are facilitated by Innovative Auctions Limited, and independently audited by Deloitte. Further Applicant Auctions will be held as applicants request them, on mutually agreed upon schedules — the next ones will likely take place in December. We look forward to congratulating participants from this auction in person and connecting with more applicants in Buenos Aires.
Written by Sheel Mohnot, Project Director, Applicant Auction
A lot of people (including me) are pretty upset at revelations of the breadth and scale of NSA spying on the Internet, which has created a great deal of ill will toward the US government. Will this be a turning point in Internet governance?
No, smoke will continue to be blown and nothing will happen.
Governments are not monolithic. What people call Internet governance is mostly at the DNS application level, and perhaps IP address allocation. The NSA is snooping down in the tubes, in the underlying networks, and in servers located in the U.S., where none of this matters. They do have a few DNS-based attacks, but they'd work the same way regardless of who was running the real DNS servers.
Brazilian president Dilma Rousseff addressed the UN:
Rousseff called on the UN to oversee a new global legal system to govern the internet. She said such multilateral mechanisms should guarantee the "freedom of expression, privacy of the individual and respect for human rights" and the "neutrality of the network, guided only by technical and ethical criteria, rendering it inadmissible to restrict it for political, commercial, religious or any other purposes."
This is what is known in technical circles as a crock. Nation states can and will spy on any traffic that passes through their territory. This shouldn't come as any surprise to people who are familiar with, say, the history of World War I. (See Telegram, Zimmermann)
One detail that seems to elude a lot of the governance crowd is that the Internet is designed so that everything is voluntary. If you want to force networks to do stuff they are not inclined to do, the only modes of influence are threats of disconnection, or for networks within a specific country, legal pressure from their own government.
The countries that make all the noise have zero leverage over US networks because their networks have far more to lose than we do if they disconnect, both because so much content is hosted in the US, and because so many transit routes run through the US.
When I was at the ISOC/ITU/OAS spam day in Mendoza last week, I was talking to a guy who worked for a large Internet vendor. He told me that the pricing within Brazil is still so screwed up that it's often price competitive to buy circuits to Miami and peer with other Brazilian and South American networks there. As far as content neutrality goes, it's still in pretty good shape on long-haul circuits, although I expect "neutrality" in the speech above is code for "we don't want to pay the whole cost of circuits to Miami."
If Brazil wanted to stop US spying on their traffic, they could fix their domestic telephone prices and build a few domestic Internet exchanges, so their networks all exchanged traffic directly with each other, and with other South American networks, rather than via Miami. This would not be particularly expensive, although it would make the de facto telephone monopoly unhappy.
If Brazil built more submarine cables that went other places than the US, e.g. Africa and Europe, which would be a good idea for redundancy and shorter transit times, they'd probably be spied on less by the US, and more by whoever is at the other end of the cables. Someone commented that cables are expensive, but so are football stadiums.
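A rough back-of-the-envelope calculation shows why keeping traffic at domestic exchange points also matters for performance, not just for who gets to listen in. The Python sketch below compares round-trip propagation delay for traffic between São Paulo and Rio de Janeiro when it is hairpinned through Miami against traffic exchanged at a hypothetical domestic IXP; the distances and the roughly 200 km/ms signal speed in fibre are assumptions, and real latency also includes routing, queuing and equipment delays, so the numbers are illustrative only.

```python
# Back-of-the-envelope comparison of propagation delay; all figures are rough
# assumptions for illustration, not measurements.
SPEED_IN_FIBRE_KM_PER_MS = 200.0  # roughly two thirds of the speed of light

# Approximate one-way path lengths in km (assumed, rounded).
SAO_PAULO_TO_MIAMI = 6_600
RIO_TO_MIAMI = 6_700
SAO_PAULO_TO_RIO = 360

def rtt_ms(one_way_km: float) -> float:
    """Round-trip propagation delay for a given one-way path, in milliseconds."""
    return 2 * one_way_km / SPEED_IN_FIBRE_KM_PER_MS

via_miami = rtt_ms(SAO_PAULO_TO_MIAMI + RIO_TO_MIAMI)  # traffic hairpinned through Miami
domestic = rtt_ms(SAO_PAULO_TO_RIO)                    # traffic kept at a local exchange

print(f"Via Miami:    ~{via_miami:.0f} ms round trip")  # ~133 ms
print(f"Domestic IXP: ~{domestic:.0f} ms round trip")   # ~4 ms
```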
Perhaps someday they'll have robust enough networks to route directly rather than through the US and enough going on other places to provide the content their users want without fetching it from the US, but building that is expensive. In governance discussions, spending one's own money has always been beyond the pale.
Written by John Levine, Author, Consultant & Speaker
My interview, published in the newspaper "Галерия" (Galeria) on October 10. The title and the highlighted passages are mine. The questions and answers actually date from 29 September 2013, so some more recent developments may be missing. The newspaper's original headline was "I don't remember my father", with the subtitle: "I was only 7 years old when he passed away."
- Mr. Markovski, you have sharply criticized Tsvetan Tsvetanov. On October 10 he goes to court. Do you believe the charges will be proven?
- I have no idea – I haven't read the indictment, I don't know which prosecutors led the investigation, or whether they found people in the Interior Ministry capable of gathering admissible evidence. The court will decide.
But I criticized Tsvetanov not for that, but because he is one of the main destroyers of statehood in the Republic of Bulgaria. If another minister had read out wiretap transcripts in Parliament, I would have criticized him instead; the others simply turned out not to be quite that foolish.
- Has Sotir Tsatsarov managed to live up to the public's trust?
- You know, public trust is a changeable thing. What matters more is that he live up to his legal education and do some work that is useful to the people. I say "some", because he cannot possibly do everything. In our country, however, people treat the prosecution service like a complaints book, of the kind public canteens used to keep – as if filing a complaint would change the quality of the food. The prosecution, of course, also "warmed up" people's expectations, but the chief prosecutor can and must come out publicly – and do so every week – and state that with this system and with these people working in it, we should not expect miracles. Otherwise he will lose all trust, as happened before him with Boris Velchev. Few remember the hopes with which he was elected. Of course, the decisions of the Supreme Judicial Council also bring to mind a "fixed match", and I am not very convinced that a new and fairer justice system can be built with these old cadres. The easiest way to judge the chief prosecutor will be by the results of his work, namely the probes and cases against former minister Tsvetanov, and even more so, whether probes will be opened and charges brought against the former prime minister.
- Why doesn't Borisov expel Tsvetanov from GERB despite the many scandals surrounding him?
- Could it be because no one else is as loyal to him? Just look at what happened with one of his close ministers, Miro Naydenov – he betrayed him right away. And yet only recently he was his sparring partner; if you recall their visit to Japan, Borisov was demonstrating holds on Naydenov before the eyes of the astonished hosts.
- In Filip Zlatanov's notebook, besides I.F. there were also Ts.Ts. and B.B. Why did only Fidosova take the fall?
- Well, Tsetska Tsacheva's time will come too, don't worry. What matters more is that Filip Zlatanov's time has run out. So will the time of the others who think they are eternal. You have probably already forgotten the cover of a glossy magazine that ran a photo of Tsacheva – a photo with so much retouching and make-up that at some point even she herself felt embarrassed... Can you imagine – a Bulgarian politician feeling embarrassed?!
- You have said more than once that "Tsvetanov is the only politician, apart from Gaddafi, to have declared Bulgarian medics child-killers." Now he has apologized to the medics. Is that enough?
- You are mistaken – he did NOT apologize to the medics. Just as Gaddafi did not apologize to them – he sentenced them to death and then sent them to Bulgaria – so Tsvetanov condemned them from the rostrum of parliament. Here is what he said: "I apologize to these medics for what I actually did to them with that disclosure of those wiretaps from the Bulgarian parliament", and also: "I would not say that I concluded they were murderers, but let us accept that I said something of the sort. I do apologize to these medics, indeed, now that the prosecution has decided there is not enough evidence to go further."
If you see here an unconditionally offered apology and a request that it be accepted, I do not. He says "I apologize" – that is, he excuses himself on his own – and then adds the caveat: not because they are innocent, but because the prosecution "has decided there is not enough evidence". Do you understand – from these phrases one cannot help but be left with the impression that Tsvetanov still considers them guilty, only that there was not enough evidence. By the way – do you know who gathers the evidence? Yes, exactly – the Interior Ministry, in the days when Tsvetanov was its minister.
A real apology after such a tragedy should be offered unconditionally, humbly, not on television but in person – standing, in Gorna Oryahovitsa. And not even standing, but on one's knees.
- Why did you block all of your Facebook "friends" who support GERB?
- I did not block them; I urged them not to visit my page, so they would not get annoyed. And those who came to advertise the activities of Tsvinocchio and Pinocchio I asked to stop, so they would not lose the ability not only to write but also to read – which is what happened to those of them who did not stop and were removed from my list of Facebook friends.
- Some of them accuse you of serving the BSP and of censoring those who think differently?
- Well, there will always be people who cannot accept the truth – that they were not dissenters, just badly brought up. I am not a politician and I have no need for ratings, so I can afford the luxury of imposing rules of conduct on my own Facebook page. There is no censorship – everyone is free to express their opinion on their own page.
- The government is now preparing a new Electoral Code. There is much debate about whether Bulgarians abroad should have the right to vote. What is your opinion on the matter?
- I have long called on those in power to provide electronic voting, not only for Bulgarians abroad but for all Bulgarians. It is technically possible, it does not cost much money, and there are countries that have been voting this way for years.
- Will electronic voting stop electoral manipulation?
- There will always be manipulation – it is in human nature, so it will not stop it. But it will reduce it, or will make manipulation outside the digital world matter far less for the outcome of elections.
- Are there reformers in the Reformist Bloc?
- Of course there are. Among them there is at least one true European politician – Meglena Kuneva, who unfortunately did not enter the current Parliament. She is a person who has proven she can take responsibility and help people – in short, all things that seem to make her... unfit for our Parliament, alas...
- You enjoy great public trust. Are you considering entering politics? (Have you had an invitation from a party, or would you like to join one?)
- I don't know why you think I enjoy any great public trust. That I am read (widely?) on the Internet is a fact, but I do not write in order to collect admirers; I write because this is my inner world, my essence – I am a third-generation writer, after all.
I have had invitations, but I have declined them for a number of reasons, the most important of which right now is this: I have more than ten years of international experience and career, which are not fit for domestic political consumption. My help for Bulgaria can only come from outside the country – that is where my strength lies. If those in power were ambitious, they would use all the people like me – Bulgarians around the world who will not return, but who will help from abroad.
- And some more personal questions. At just 7 years old you were left without your father. What do you remember of him?
- Almost nothing, unfortunately. I know him more through the memories of his friends and the stories of my relatives, as well as through the books he wrote.
- Did you have a difficult childhood?
- No, but unfortunately I became disillusioned with the system far too early. Someone may now ask why that is unfortunate, but think of a teenager who writes and publishes short stories, some of which could not be printed because they sounded as if written against the system, against socialism; who, on the other hand, published a newspaper for three years which, I learned only after the changes, had been categorized in the West as a dissident publication. What kind of dissident publication by schoolchildren? We simply wrote the truth – that there was bullying at school, that army service was a waste of time, that the country's leaders handed out medals to one another, that the soldiers at the front in the Second World War did not shout "For the Motherland, for Stalin!" when they were sent to the slaughter, but cried "Mama!" and "God!".
- Why did you leave for the USA after the changes?
- That is how life turned out; I did not plan it. I left partly because in Bulgaria I had achieved everything a person could achieve in the country – I had a working company, we paid our taxes, and together with a group of friends and like-minded people we engaged in civic work. We made sure that the Internet in Bulgaria would be free of regulation and state control, and the consequences of that long battle are especially visible today, when people everywhere have fast and cheap access to the Net. Half-jokingly: after helping Bulgaria become a free and democratic country, I went off to help other countries become the same.
- After 24 years of democracy, do you think Bulgaria is moving forward, in the right direction?
- Bulgaria does not have much of a choice, because the whole EU is moving forward, and we have made our choice – sometimes "with gentleness and kindness" (voluntarily), sometimes "with a bit of a beating" (under pressure from the European Commission) – but we are moving. The problem is not the direction, but the speed, and the fact that there are far too many people in our country who look back with nostalgia and think socialism was something good.
- But do you really deny the achievements of socialism? Social security, decent pensions, and so on?
- What achievements and what decent old age are we talking about? Was the chushkopek (the electric pepper roaster) invented today? No, it was invented in the time of socialism. Wasn't the most sought-after and scarcest commodity back then the humble jar lid? The very word "deficit" and its companion "connections" may not mean much to the younger generation, but let us tell them that back then we thought bananas grew only in winter, only in December, because we could buy them in the shops only in the days before New Year's Eve. And that there were waiting lists and queues to buy a car, a television set, an apartment. The waiting lists for cars ran to 10-15 years. YEARS. And you could buy an apartment only in the city where you had residence registration; you could not simply decide to go and live in another city just because you liked it. I could go on about this for several more hours, but there is no point – socialism turned out to be not a "stillborn child", as Todor Zhivkov calls it in a secret report of his from 1988, but simply something invented by Lenin and Stalin, who did not even bother to try it out on dogs first, to see whether or not it was fit for humans.
- Are the politicians or the ordinary people to blame for the state we have brought ourselves to?
- There is one guilty party, and each of us can see him in the mirror. Everything else is an attempt to shift our personal responsibility onto someone else.
- Will you return to Bulgaria?
- I have never left it. In today's world a person does not have to be somewhere physically in order to feel part of it. When we watched the matches of the 1994 World Cup, when our team was playing in the USA, did the distance stop us from feeling Bulgarian? Most emigrants I know did not leave the country to be far from Bulgaria, but to be far from certain Bulgarian traits – the spite, the envy, the hatred. With those feelings, which flare up in us periodically and quickly, like weeds, no one has ever managed to achieve anything good.
Many law firms and Intellectual Property departments in charge of managing brands and domain names for their customers or businesses must have had that same question: "how do I protect a brand online under the ICANN new gTLD program?" The first potential answer that is usually offered up to an enquirer is: "the Trademark Clearinghouse does that".
As time goes by and the rules under which the Trademark Clearinghouse operates become better defined and understood, this answer is increasingly revealed as fallacious.
The Trademark Clearinghouse (TMCH) is not a protection mechanism; rather, it provides privileged access to domain name registration before a registry launches.
In other words, the Trademark Clearinghouse does two things for brand owners that have registered their marks in the TMCH:
- It allows you to participate in a Sunrise Period to register your trademarked brand as a domain name;
- It informs you if someone else has applied to register your exact mark as a domain name for up to 90 days following the start of the Sunrise Period.
Importantly, there is one thing the Trademark Clearinghouse absolutely does not do: namely prevent a third party from registering your mark as a domain name during Sunrise, Landrush or General Availability.
It is important to note that registering in the Trademark Clearinghouse has a cost (the Trademark Clearinghouse charges $150 to lodge one registered mark for one year; budget at least $200 if going through an ICANN-accredited registrar), and there is also a cost to registering a domain name in a Sunrise Period (the sunrise fee varies from registry to registry). Therefore, the more marks you register in the Trademark Clearinghouse, the more expensive it becomes. And the more domain names you want to keep out of third parties' hands, the more domain names you must register during the Sunrise Period.
Are brand owners prepared to lodge all their protected marks in the TMCH so that they can defensively register them during sunrise in all available Top Level Domains? It is expected that there will be over 600 commercial sunrise periods in which mark owners can register their marks as domain names before the first-come-first-served period of General Availability commences. Brand owners must make commercial decisions about a) which, if any, of their protected marks they should register as domain names in Sunrise; b) under which Top Level Domains they wish to register them; and c) whether or not to register non-trademarked but brand-specific terms (e.g. www.cheapbrandname.shoes) defensively in the Landrush/General Availability phases, or to rely on post-registration dispute procedures to enforce their rights.
The new but as-yet untested Uniform Rapid Suspension process
The Uniform Rapid Suspension (URS) process is a new and much-vaunted possible solution to the shortcomings of the older but well-tested Uniform Domain Name Dispute Resolution Policy (UDRP), but both processes operate only after the infringement has occurred, when the domain name has already been registered by a third party. It is important to note that undertaking a UDRP or a URS procedure is no guarantee that you will recover a domain name. The costs associated with recovery are typically many times more than the cost of registering the domain name in the first place.
The URS will have the benefit of being considerably cheaper and quicker than the UDRP. However, the URS has never been operated before, and there are sure to be a number of corner cases to resolve and a considerable delay before wholesale adoption by the brand owner community.
Costs associated with UDRP start at $1,500 but are typically nearer $5,000 excluding case preparation fees and other legal costs and with a resolution timeframe of at least 45 days.
Because the process is so new, the real costs associated with the URS cannot yet be measured; however, the filing fee is estimated to be no more than $500, excluding case preparation fees and other legal costs, which may be between 2 and 10 times the filing fee depending upon the complexity of the case. The resolution timeframe is expected to be in the region of 20 days.
At the very best, this indicates that the cost to recover a single domain would be in the region of $1,500. The cost to register a single domain in Sunrise varies from registry to registry but is likely to be in the region of $100 to $150.
If such procedures aren't proven or you can't justify their costs, what other solutions does a trademark owner have to limit brand risk and to guarantee the right level of brand protection?
An alternative that guarantees brand protection does exist
Some new gTLD applicants have analysed ICANN's proposed Rights Protection Mechanisms (RPMs) and created additional RPMs to support brand owners' rights and provide a more complete protection suite, such as:
- Domain Name Blocking – allows registrants to purchase a non-resolving domain name based upon proof of trademark. While blocking sounds like a reasonable mechanism to stop others registering domain names which contain protected terms, it is purely a defensive mechanism: it will not contribute to routing web traffic to your website, nor will it allow the brand owner to realise or grow the asset value of a domain name.
- Pre-reservation – allows rights owners to pre-order a group of domains in Sunrise which contain the trademarked term e.g. www.cheapTRADEMARK.book, www.TRADEMARKsecurity.shop or www.TRADEMARKbirmingham.music. These domains will be registered at the start of the General Registration period, and will typically offer a discount on the retail price of a domain in General Availability.
These mechanisms, in particular, make it possible to largely prevent cybersquatters, typosquatters, counterfeiters, detractors and other unscrupulous third parties from registering critical domain names containing your trademarked brand.
The advantage of such solutions is not only their cost-effectiveness when compared with after-the-fact rights protection mechanisms such as the URS and UDRP, but also their simplicity:
- You can buy a discounted pack in sunrise covering multiple gTLD extensions for your trademarked name.
- You can buy a discounted pack of second level domains which contain your trademarked term.
- Your job is done: your brand is secured on a complete set of new domain name extensions, and no one can register and abuse domain names similar or close to your brand.
In terms of pricing and logistics, here is why brand owners should give such solutions serious consideration (a worked comparison follows the list below):
- The minimum cost of lodging a URS procedure is around $1,500 for one domain name;
- Registering in the Trademark Clearinghouse + Sunrise Registration for one domain name:
- Registering one brand in the Trademark Clearinghouse costs approximately $200 (at the service provider);
- Registering one domain name in a Sunrise Period costs approximately $130;
- Total: $330 for one domain name in a Sunrise Period.
- Pre-ordering, say, 100 domain names containing a trademarked term registered in Sunrise will cost around $10 per domain.
- The cost of registering 101 domains defensively will be approximately $1,330 compared to $1,500 for the cost of one URS action.
- Putting microsite webpages on 101 domains will invariably drive traffic to the core website.
- Search engines can index and rank 101 websites, pushing down other, less authentic domains in the listings and further reducing brand risk.
- The asset value of domain names will increase rapidly with the right combination of content, indexing and promotion.
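To make the arithmetic above concrete, here is a minimal sketch in Python that reproduces the comparison. All figures are the rough, illustrative estimates quoted in this article (TMCH lodging, sunrise, pre-order and URS filing fees vary by registry, registrar and dispute provider), so treat the output as an order-of-magnitude guide only.

```python
# Illustrative cost comparison: defensive registration vs. a single URS action.
# The constants below are the approximate figures quoted in this article and
# will differ in practice from registry to registry and provider to provider.

TMCH_LODGING = 200      # lodging one mark in the Clearinghouse via a registrar
SUNRISE_FEE = 130       # registering one domain name during a Sunrise Period
PREORDER_FEE = 10       # pre-ordering one further domain containing the mark
URS_FILING_MIN = 1500   # minimum cost of lodging a single URS action

def defensive_cost(sunrise_domains: int, preordered_domains: int) -> int:
    """Cost of lodging one mark in the TMCH, registering `sunrise_domains`
    names in Sunrise and pre-ordering `preordered_domains` related names."""
    return (TMCH_LODGING
            + sunrise_domains * SUNRISE_FEE
            + preordered_domains * PREORDER_FEE)

if __name__ == "__main__":
    total = defensive_cost(sunrise_domains=1, preordered_domains=100)
    print(f"Defensive registration of 101 domains: ${total}")          # $1,330
    print(f"Minimum cost of one URS action:        ${URS_FILING_MIN}")  # $1,500
```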
It is too early to quantify exactly what the costs will be to solve abusive domain name registrations under the new gTLD program. However, it is fair to say that:
- URS is likely to cost a minimum of $1,500 per action and multiple actions are likely to be necessary.
- URS is unproven; success is not guaranteed, and URS does not create added value for brand owners.
- Domain Name Blocking may provide a partial answer to online domain abuse.
- Defensive domain name registration can be cost effective, traffic generating and can increase domain name portfolio asset values.
Written by Jean Guillon, New generic Top-Level Domain specialist
The reaction to last week's announcement from the leaders of the “I* organizations” (ICANN, the RIRs, IETF, IAB, W3C and ISOC) on the future of Internet governance has been overwhelming. Judging from the 90,000+ visits to the IGP blog’s brief analysis of the situation, there is a global groundswell of interest in its implications.
We suggested last week that the USG has lost its chance to lead the transition away from its unilateral oversight of ICANN. The I* orgs, in alliance with at least one like-minded government (Brazil), have shrewdly positioned themselves to do so. However, the details about how such a transition would occur are absent. What would a newly independent ICANN look like? How would it be held accountable to its stakeholders? How will we get there? It is these details which should be on the agenda of the highly anticipated meeting in Rio this coming spring.
We respectfully suggest that the Rio meeting must not be organized as a parade of “leaders” on a podium purporting to speak for the public. Let the meeting be open to anyone and everyone with a serious stake in the accountability of ICANN and its relationship to the U.S. and other governments. Let it have an open process for submitting, deliberating upon and expressing support for or opposition to specific proposals. Let us also not forget that ICANN and its oversight are the main topic of the meeting, which suggests that ICANN’s staff should not be playing a major role in setting the agenda for the meeting; ICANN has a bit of a conflict of interest in that regard. We must not allow ICANN to use its escape from the USG to escape all accountability. Ideas should be solicited widely, not from an assemblage of leaders hand-picked by the institution being governed. Let any dialogue on transition emerge openly from civil society, industry and governments and let them determine their own representation in the Rio event. That is the essence of “bottom up, multistakeholder” governance.
In this spirit, we would put forward as part of this dialogue the IGP’s own comments filed in 2009 in the Department of Commerce proceeding on the future of the IANA contract. We were already advocating the “globalization of the IANA function” back then, and developed a set of specific and executable steps that can be taken. Our comments argued that the strategy of achieving global governance and coordination through private contractual approaches could still work, provided that the proper legal and institutional framework is in place. To achieve that framework, an international agreement is needed that accepts and recognizes ICANN’s status as a public institution that provides global governance. This instrument can provide lawful constraints on its mission and adequate checks on the abuse of its authority. But the instrument should be seen not only as a way of checking or limiting abuses by ICANN itself, but also as a way of limiting interference in ICANN by governments (both the U.S. and others). Governments should be involved not as “oversight” authorities or “public policy makers” but as backers of a set of impersonal rules that ensure certainty and accountability, and give the global community of Internet users a legal basis for settling important disputes.
An international agreement along these lines should have the following elements:
- The nongovernmental status of ICANN should be affirmed and formalized, as a protection against takeover by governments.
- The sovereignty of national governments over ccTLDs should be formally recognized, and authority over their delegation ceded from ICANN to national governments using a formal, secure and verifiable process. However, the instrument should also recognize the right of Internet users to access and register under global TLDs so as to avoid monopoly.
- There should be a prohibition on using ICANN for content regulation or other violations of the right to freedom of expression; the instrument should also create a right of private parties to initiate legal challenges to ICANN actions on these grounds.
- The agreement should ensure the consistency of economic regulation of DNS and IP addressing with antitrust and nondiscriminatory trade principles (consistent with its current mandate to increase competition); here again, there should be a right of private parties to initiate legal challenges on these grounds.
- Selection of an appropriate body of national law under which ICANN should operate. If California Nonprofit Public Benefit Corporation status is deemed the best option, then its membership provisions need to be rethought and reapplied to ICANN in a way that does not permit it to evade accountability and substitute open-ended “participation” for binding rights and obligations vis-à-vis its members.
- The GAC should be dissolved and ICANN’s Supporting Organizations opened to participation by individuals from governments and their agencies.
- Providing a legal foundation for ICANN as described above would allow for the dissolution of the existing IANA contract when appropriate.
We invite serious comment on the feasibility of this framework. We would encourage civil society actors in particular to converge on a common approach to the transition.
Two ccTLD signals should get more attention when we talk about these domains' benefits: companies in emerging markets can signal their brands to expats and to Westerners. This ability to take a company's appeal beyond its immediate, national market deserves a look and some appreciation.
Traditionally, Western companies have been the ones who registered ccTLDs to signal operations in overseas markets, while companies in emerging markets use them to signal their local brands to the local market.
I'm not talking about the names most likely to come to mind when we think about emerging-market companies that compete globally. Not Haier, China's multinational consumer-electronics and home-appliances company. Not Tata of India, with its empire of companies ranging from agrochemical specialists to retail chains. Not Natura, the Brazilian company that's a leading manufacturer and marketer of perfume, cosmetics, beauty products, household products, and personal-care, skin-care, and hair-care products. I'm talking about lesser-known companies that have stuck to a particular strategy: following their countries' emigrants to new homes around the world. One example is the fast-growing South African casual dining chain Nando's, which has opened outlets in Australia, Canada, the UK, and 22 other foreign countries where large numbers of South Africans live. Another is the Saudi Arabian fragrance retailer Arabian Oud, which has set up 620 stores in 33 countries, including the UK and France. The United States has 32 million Mexican-Americans; Germany, 4 million Turks; and the United Kingdom, 3 million South Asians. They present opportunities for companies from back home that want to explore a First World market.
As far as vast new markets are concerned, Africa's $2 trillion economy is growing faster than that of any other continent. About a third of the 54 African countries are seeing annual GDP growth of more than 6%. This isn't just about diamonds and oil: Only 24% of the growth from 2000 to 2008 was attributable to natural resources.
The expected growth in emerging markets will force Western and local companies to expand their ccTLD signaling for their brand names and relevant generics. This will increase competition, which will push ccTLD prices higher.
Written by Alex Tajirian, CEO at DomainMart
Jari Arkko – Pervasive Monitoring and the Internet (RIPE 67 event in Athens, Greece)
Today at the RIPE 67 event in Athens, Greece, IETF Chair Jari Arkko gave a presentation on “Pervasive Monitoring and the Internet”, where he spoke about the ongoing surveillance issues and:
• What do we know?
• What are the implications?
• What can we do?
Similar to his earlier article on the topic, Jari looked at the overall issues and spoke about how Internet technology should better support security and privacy. He mentioned the steps the technical/operator community can take — and mentioned the upcoming events and discussions happening at IETF 88 in Vancouver in November.
The video of Jari's presentation is now available for viewing, as are Jari's slides outlining these issues. Jari's presentation goes for about 18 minutes and then is followed by about 20 minutes of questions from the audience.
If you would like to get more involved in the technical discussions around how to address these pervasive monitoring issues, please see my earlier article where I provide some links and context for the "perpass" mailing list.
Written by Dan York, Author and Speaker on Internet technologies
The International Chamber of Commerce (ICC) has announced that greater efforts to bring about better, more consultative global policy-making are needed to maximize the potential of the Internet to power future economic growth. ICC BASIS (Business Action to Support the Information Society) plans to use its presence at the 8th annual Internet Governance Forum (IGF), taking place in Bali, Indonesia, from 22-25 October, to call for greater collaboration between stakeholder groups and stronger pro-growth international policies, in order to help the Internet retain its place as the world’s primary economic enabler.
ICANN has announced a list of over 40 diverse practitioners, subject matter experts, and thought leaders as members of the ICANN Strategy Panels, which will support development of the organization's strategic and operational plans. The Strategy Panels, according to the organization, are intended to serve as an integral part of a framework for cross-community dialogue on strategic matters.
The list of announced members includes Internet pioneers Paul Mockapetris (Inventor, Domain Name System), Paul Vixie (CEO, Farsight Security), Vinton Cerf (VP and Chief Internet Evangelist, Google) and Tim Berners-Lee (Director, World Wide Web Consortium).
Below is a video interview with Theresa Swinehart, Senior Advisor to the President on Strategy:
I often think there are only two types of stories about the Internet. One is a continuing story of prodigious technology that continues to shrink in physical size and at the same time continues to dazzle and amaze us. We've managed to get the cost and form factor of computers down to that of an ordinary wrist watch, or even into a pair of glasses, and to embed rich functionality into almost everything. The other is a darker, evolving story of the associated vulnerabilities of this technology, where we've seen "hacking" turn into organised crime and from there into a scale of sophistication that is sometimes termed "cyber warfare". And in this same darker theme one could add the current set of stories about various forms of state-sponsored surveillance and espionage on the net. In this article I'd like to wander into this darker side of the Internet and briefly look at some of the current issues in this area of cybercrime, based on some conferences and workshops I've attended recently.
There is little doubt that the Internet has been populated in many diverse ways, and at the same time as we see a proliferation of online services for users, we also see a proliferation of attacks on those same services and users. Some attackers want to relieve a victim of their money, while others head into areas of disruption, intelligence collection and espionage. No doubt there are many other motives at play as well. As the variety and efficacy of such attacks escalate, there is a level of frustration that our legal framework is not keeping up to date in classifying such acts as criminal activities, and that our law enforcement and intelligence agencies are unable to undertake their roles and offer us those basic protections under law that we have come to simply assume in the physical world.
Just how different this cyber-world can be was highlighted in a recent presentation at a cybercrime conference by Fox-IT, the private Dutch organisation that, notably, performed the forensic examination of the consequences of the Diginotar certificate compromise. The presentation started with the proposition that, in the area of cybercrime, Law Enforcement Agencies (LEAs) are effectively blind. They are unresponsive to small-scale issues, and while this may lead to calls in some quarters for increased presence and surveillance, there are also sensitivities about state-based online surveillance that hinder acceptance of the value of such increased LEA activity online. And while larger organisations have the wherewithal to operate a competent security function, the general public enjoys no such facility. The legislative framework is dated, given that it is constructed on a premise of a physical world, physical criminal actions and physical evidence. In addition, there is an acute skills crisis in this sector. These considerations have led to the rise of private cyber investigation entities, such as Fox-IT, Crowdstrike and Mandiant. The motivations of the private entities in this space are quite different from those of the LEAs. The victim is the focal point, not the crime. The criminal act itself, the underlying trend and catching the perpetrator(s) are all less important than mitigating the damage caused to the victim and preventing its recurrence. The private investigation is not hampered by physical borders and does not necessarily require physical presence. Are private investigations more effective than traditional public policing? Probably not. Indeed, it's probably the case that their motivations are different enough that direct comparisons are not all that useful. Online criminals run a minimal risk of discovery and capture from private investigators, as these investigators generally have no overarching motivation to catch the criminal per se. This leads to a problematic mismatch in this space. Our common dependency on the Internet continues to grow — and grow rapidly — but our security capabilities lag far behind, and this lag is getting worse. In the public sector, legislation, skills and methods all need attention. The private sector response has filled the needs of some individual entities that can afford to use these services. But such activities do not address the broader aspects of the vulnerabilities in our environment that facilitated the criminal act, nor are they focussed on the apprehension of the criminal.
A related story comes from McAfee, which takes the position that these days cybercrime is a service-oriented business enterprise. The presentation commenced from the notion that the criminal enterprise is a service enterprise in which competitive pressures are dropping barriers to entry, such that the instruments of an effective cyberattack can be outsourced completely. The presentation pointed out the capability to purchase online zero-day vulnerabilities, email lists and target addresses. In terms of cybercrime product development, you can now purchase exploits and the testing of malware against current anti-virus software. The service providers in this activity now gather reputation and credibility in the same way as online vendors on eBay. Some spam providers evidently provide a chat window and a 24-hour help desk. In this highly competitive market for cybercrime services there is also a price slump, and bot armies can be leased for nominal sums. Similar services exist in the hacking space, where email passwords can be cracked for a fee, as can credit cards. This is spreading into currency, where Bitcoin is evidently widely associated with cybercrime. The volume of this activity is increasing, the severity is increasing, and the LEA response is largely non-existent.
Trend Micro has looked at the pervasive use of online apps and services and the scale of the risk that cybercrime represents to our society. It has produced a work that one could call a speculative near-future reality TV show, or a webcast drama, on the theme of the pervasive generation and use of personal information — and the potential ways this could be perverted. Microsoft operates a data collection subsystem within a larger framework called "Operation b54", which is being used as a means to gain some real-time visibility into botnet formation and operation. At the same time as this botnet proliferation, Microsoft reports that it is seeing increased use of TOR, Bitcoin and VPS solutions, where technically savvy users are adept at burying themselves and making online anonymity commonplace.
The LEA sector is facing considerable challenges at this point in time. One view from within the sector is that the level of such criminal activity is escalating to unprecedented levels, while the resources and skills available to the LEAs remain woefully inadequate. The LEAs appear to be lagging behind the private investigators, who are innovating at a pace commensurate with their criminal counterparts. This keeps LEAs constantly back-footed, despite protestations to the contrary. There is a similar story with some of the Computer Emergency Response Teams (CERTs) and the LEAs, where there are claims that the relationship between the CERT teams and the LEAs is not working as well as it could. We hear from some CERTs that the LEAs are unresponsive and not overly cooperative — and a similar claim is heard in the other direction. Again, there is a difference of motivation: the CERT is strongly motivated to assist its clients — potential or actual victims — while the LEA's focus is often directed to the crime itself and its perpetrators.
In the meantime, over on the Internet, things are not exactly getting any better.
These days the current picture of spam is that it overwhelms the "real" email traffic. Some spam-fighting groups have claimed that spam outnumbers genuine mail by a factor of 100:1. Whatever we might claim as the success of the various responses — blacklists, mail filters, reputation services, certification and regulation — the basic observation is that the escalating volume of spam has managed to readily outscale the effectiveness of those responses. At the same time we've now managed to imbue IP addresses with "reputation", and it's difficult to know whether this is a net positive — the practice certainly has its downsides. Many readers will be aware of the various forms of blacklists that list the "disreputable" IP addresses. These blacklists enumerate the IP addresses of hosts that have been observed to emit spam, control a botnet, host phishing web sites, or engage in any of a number of related nefarious activities. Once an IP address is listed in one of these lists, many other systems will not communicate with it. It's often claimed that it is extremely easy to get an IP address into one of these blacklists, but once listed it's very hard to get it off. Jane Austen may well have been talking about IP blacklists when, in Pride and Prejudice, she had Mr Darcy confess that "my good opinion once lost is lost forever." Part of the problem here is that it is very easy to set up a blacklist, as many folks have already done. So if an address has gained a bad reputation, understanding the context and working out where it acquired this blacklist status is often a challenge. You can tell when a concept has gone perhaps a little too far when aggregators enter the fray. As is claimed on the dnsbl.info web site: "DNSBL Information provides a single place where you can check the status of your mail server's IP address on more than 100 DNS based blacklists."
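As an aside on how these DNS-based blacklists are actually queried, here is a minimal sketch in Python: the octets of an IPv4 address are reversed and prepended to the blacklist's zone name, and a successful A-record lookup means the address is listed. The zone zen.spamhaus.org is used purely as a well-known example; most DNSBLs follow the same convention, though return codes and query policies differ from list to list.

```python
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Return True if the IPv4 address appears on the given DNS blacklist.
    The lookup reverses the octets (192.0.2.1 -> 1.2.0.192.zen.spamhaus.org);
    an A record coming back indicates a listing, NXDOMAIN indicates none."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_octets}.{zone}")
        return True
    except socket.gaierror:   # NXDOMAIN (or other lookup failure): not listed
        return False

if __name__ == "__main__":
    # 127.0.0.2 is the conventional "always listed" test entry for many DNSBLs.
    for address in ("127.0.0.2", "192.0.2.1"):
        print(address, "listed" if dnsbl_listed(address) else "not listed")
```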
The fact that software has its vulnerabilities should come as a surprise to no one. And the observation that such vulnerabilities have been exploited for criminal acts should also not really come as a surprise. So we should fix these vulnerabilities — right? However, sometimes the decision to fix the vulnerabilities in a deployed system transforms itself from a single operation of finding all vulnerabilities, patching the code and circulating the updates, into repeated iterations of collecting all currently known vulnerabilities, patching and circulating — and for some systems this process persists for years and becomes a task that is seemingly without end. The longer the software has been deployed and the larger the population of deployed systems, the higher the incidence of detected vulnerabilities. Windows XP falls into that category of a widely deployed and by now quite venerable platform. XP appears to account for around a third of the currently deployed desktop and laptop systems on today's Internet. Microsoft has released a large number of patches for this system over the years, yet the vulnerabilities in the deployed population appear to persist. Worryingly, Microsoft has announced that further support for Windows XP will cease as of 8 April 2014. To further complicate the issue, it appears that unlicensed copies of this particular operating system have been widely distributed over the years, and with the centralised form of update management used by Microsoft, it appears that many users of these pirate copies believe that applying updates will invalidate their copy of the system. So we see continuing use of original, vulnerable systems on an Internet where the volume and sophistication of attack probes overwhelm the residual defences of these old, unpatched systems. The persistence of so-called botnet armies of corrupted systems under remote control is one of the more disturbing outcomes of this situation. But the larger picture relates to the increasing dependency that we place on the networked environment as part of our lives, and the concern that many of the elements supporting this environment use neglected and highly vulnerable software. From the signs at airports, to cash terminals at retail outlets, to thousands of other deployments ranging from the mundane to the vital, it appears that XP is still prolific, and its vulnerabilities are a source of serious concern.
But it's not all a case of software vulnerabilities in end systems. We also see instances where the IP protocols are turned against us. One of the more effective denial-of-service attacks on today's network is a DNS reflection attack, where DNS resolvers can be turned into uncontrolled traffic generators, with a result that is capable of swamping many service providers with unsolicited and unwanted traffic. We would not be so vulnerable to this form of attack if it were not so easy to pass packets through the network with a crafted source address. But our efforts to convince network operators that everyone benefits if they maintain so-called BCP 38 filters on their outbound traffic have largely fallen on deaf ears. The document, BCP 38, was published 13 years ago — as RFC 2827, in May 2000. So no one could really claim that it is too recent and they haven't had the time to get around to implementing it yet. This is an instance of a more general observation about our behaviour: when the assessed likelihood of an event, multiplied by the damage it would directly cause us, is less than the marginal cost of mitigation, we simply don't act to reduce our exposure to the risk. As long as these network operators believe it to be highly improbable that they themselves will be the targets of such an attack (or that the cost of such an attack would be relatively minor), they see little reason to spend money to mitigate the risk to their customers by maintaining rudimentary source address filters that radically narrow the scope of attacks relying on source address spoofing.
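The principle behind BCP 38 is simple enough to express in a few lines: forward a customer's packet only if its source address falls within the prefixes actually assigned to that customer. The sketch below, in Python, is purely illustrative — the prefixes are invented for the example, and real deployments implement this in router ACLs or unicast reverse-path-forwarding checks rather than application code.

```python
import ipaddress

# Hypothetical prefixes assigned to one customer port; in practice this list
# would come from the provider's own address-assignment records.
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def permit_egress(src_ip: str) -> bool:
    """BCP 38-style check: accept the packet for forwarding only if its
    source address belongs to a prefix assigned to this customer."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

if __name__ == "__main__":
    print(permit_egress("203.0.113.7"))   # True  - legitimate customer source
    print(permit_egress("8.8.8.8"))       # False - spoofed source, drop it
```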
Can't we use all these clever networking technologies to track down all these network nasties and turn them off? There is a widely held impression, reinforced by the PRISM stories, that the online environment is one that admits little in the way of personal privacy and true anonymity. One is led to believe that the thick plumes of our digital exhaust are carefully stored and analysed. No deed goes unnoticed, and no act is truly anonymous. But this is probably a somewhat mistaken impression. Depending on the networks involved, the technologies used in those networks, whether or not the networks perform even rudimentary logging, and the attendant issues of correlating the various logs of all these activities, nefarious or not, the true ability to perform such extensive tracing and tracking is very much context-dependent. The result is a network that appears to be an eclectic mixture of fish bowls and dark alleyways, without any real way of figuring out in advance precisely where we are at any point.
Law enforcement agencies are exposed to the same variability, which means that their ability to respond to cybercrime in ways that match society's expectations also varies. Sometimes bad acts on the Internet are readily exposed, and the criminal perpetrators along with them. At other times we find LEAs lacking the essential skills, resources and basic forensic data to respond in a meaningful way, and the private investigators are there to fill the gap.
The network is truly a place that has its dark and hostile corners and many bad deeds not only go unpunished, but often go undetected by all but the victim and the perpetrator. And, on the whole, for a vital public communications utility in today's world, that's probably not a very reassuring place to find ourselves.
Written by Geoff Huston, Author & Chief Scientist at APNIC
Milton Mueller from Internet Governance Project writes: "In Montevideo, Uruguay [last week], the Directors of all the major Internet organizations — ICANN, the Internet Engineering Task Force, the Internet Architecture Board, the World Wide Web Consortium, the Internet Society, all five of the regional Internet address registries — turned their back on the US government. With striking unanimity, the organizations that actually develop and administer Internet standards and resources initiated a break with 3 decades of U.S. dominance of Internet governance. A statement released by this group called for 'accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing.' That part of the statement constituted an explicit rejection of the US Commerce Department's unilateral oversight of ICANN through the IANA contract. It also indirectly attacks the US unilateral approach to the Affirmation of Commitments, the pact between the US and ICANN which provides for periodic reviews of its activities by the GAC [Governmental Advisory Committee] and other members of the ICANN community."
Kevin reported on this last night.
As you can see from the reactions to his post, a lot of people are surprised, shocked and even quite upset that the DotGreen application has been withdrawn. DotGreen's was not the only application for the string — which is why it was withdrawn — but to many people in the ICANN space, DotGreen was the applicant everyone associated with the string.
The unfortunate reality of the new TLD process is that money speaks more loudly than anything else.
Applicants with deep pockets can beat off applicants with good intentions.
Personally, I was shocked to hear the news that they'd thrown in the towel, but given the competition they're up against, it's understandable.
Here's the full statement they released last night:
* * *
It is with great regret that we share the news that DotGreen Community, Inc. has withdrawn its application for .green from the new Top-Level Domain (TLD) program by the Internet Corporation for Assigned Names and Numbers (ICANN).
Six years ago, I had a vision for a new .green TLD that would serve the world, and boost environmentalism online, while contributing a new income stream to dedicated non-profit channels. Since then, we've been further inspired by the Green Community as we shared our vision globally. We have seen the successful launch of ICANN's new gTLD program, collaborated to grow a company specifically to apply for .green, created a non-profit public charity, and enjoyed interacting in the diverse Internet and ICANN communities.
Competition for .green surfaced at ICANN's big reveal on June 6, 2012, when existing Internet registry operators positioned themselves as managers for .green. Since then, DotGreen, supporters, and the global green community exercised all options within the framework of the new gTLD applicant guidebook and we would like to express our sincerest gratitude that the Green Community has had this conversation around a new TLD. The outpouring of public support included posting online comments and writing letters to both ICANN and the Governmental Advisory Committee (GAC) during every stage of the program, contributing to the multi-stakeholder process. We are very proud of that!
Despite these efforts, it is not possible for us to move forward at this time without an auction scenario that would award .green to the highest bidder.
We believe an auction is counterproductive to the collaborative nature of the green movement. Awarding .green management to the highest bidder, disregarding community support, collaborative partnerships, and business practices subverts the meaning and interests of the green movement, the Internet user public, and the ICANN multi-stakeholder model. This auction procedure undermines DotGreen's long history in the green community, and it negates the authenticity of DotGreen's application. A single string applicant such as DotGreen may also find financial and timeline requirements more challenging compared to portfolio application peers.
We are very glad we have introduced the .green TLD to the world, and remain hopeful that the .green TLD will benefit the environmental sustainability community, and would like to see .green become a vital part of the Internet ecosystem.
It has been an honor to participate in the international Internet community for the past several years. I am truly grateful for the supportive professionals, contracted partners, and friends we have met through the process. I'm so appreciative to all who have joined us for People & Planet events before ICANN meetings, learning right alongside DotGreen about local green initiatives to take back home to our own communities. DotGreen will keep on serving the world through the nonprofit foundation, but will unlikely continue our role in the ICANN community through positions in DNS Women, ALAC, and the NTAG.
We are excited about the DotGreen Foundation, and look forward to focusing on ongoing projects. Please visit us at our website www.dotgreenfoundation.org. It is our hope that the public will continue to stand with us to allow progress for the DotGreen Foundation vision to support programs and projects aimed at sustainability, which serve our planetary home. It is our full intent to remain strong advocates for green business, green technology, environmental stewardship, and green ideas.
Annalisa Roger, Founder
DotGreen Community, Inc.
* * *
It's a real shame, as they were the only applicant with clear plans to use the extension to further the "green" movement.
They had a good team of people working with them, so hopefully they'll be snapped up quickly — assuming they still want to stay in the space.
Written by Michele Neylon, MD of Blacknight Solutions
CircleID: The Boundary Between Sec. 230 Immunity and Liability: Jones v. Dirty World Entertainment Recordings
Out in the wilderness of cyberspace is a boundary, marking the limits of Sec. 230 immunity. On one side roam interactive services hosting third-party content, immune from liability for that content. On the other side is the frontier, where interactive content hosts and creators meet, merge, and become one. Here host and author blend, collaborating to give rise to new creations. Hosts herd authors towards a potentially desolate land desecrated with insinuation, defamation, and slanderous allegations. Here, Sec. 230 has no power to protect.
We have been to the frontier before. The lead case unfolded in 2008: Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008). Here, according to the court, third-party authors that wished to post to Roommates.com were required to fill out a questionnaire and were required to answer questions that were alleged to violate federal and state housing discrimination laws. Where the host requires third-parties to answer certain questions, and answer in ways that are problematic, "such acts constituted the 'creation or development of information' and thus made the site an 'information content provider' within the scope of 47 U.S.C. § 230(c) and (f)(3)." Both host and third-party are now potentially liable for the created content.
Jones v. Dirty World Entertainment Recordings, LLC (Eastern District of Kentucky, Aug. 2013) is the latest case to explore the frontier, and the litigation has now been through two trials. From the first trial, we are informed that:
Defendant Dirty World, LLC operates, from its principal place of business in Arizona, an Internet web site known as "the dirty.com." This web site invites and publishes comments by individuals who visit the site, and defendant Hooman Karamian, a/k/a Nik Richie ("Richie"), responds to those posts and publishes his own comments on the subjects under discussion.
Plaintiff Sarah Jones is a citizen of Kentucky; a resident of Northern Kentucky; a teacher at Dixie Heights High School in Edgewood Kentucky; and a member of the Cincinnati BenGals, the cheerleading squad for the Cincinnati Bengals professional football team.
The conflict ensued over comments posted to Defendant's website about plaintiff.
[T]he evidence conclusively demonstrates that these postings and others like them were invited and encouraged by the defendants by using the name "Dirty.com" for the website and inciting the viewers of the site to form a loose organization dubbed "the Dirty Army," which was urged to have "a war mentality" against anyone who dared to object to having their character assassinated.
Specifically, defendant Richie added his own comments to the defamatory posts concerning plaintiff. For example, on December 7, 2009, a third-party posted, under a large photo of plaintiff:
Nik, here we have Sarah J, captain cheerleader of the playoff bound cinci bengals . . Most ppl see Sarah as a gorgeous cheerleader AND highschool teacher . . yes she's also a teacher . . but what most of you don't know is . . Her ex Nate . . cheated on her with over 50 girls in 4 yrs . . in that time he tested positive for Chlamydia Infection and Gonorrhea . . so im sure Sarah also has both . . whats worse is he brags about doing sarah in the gym . . football field . . her class room at the school she teaches at DIXIE Heights.
To this, Richie added his own tagline, in bold: "Why are all high school teachers freaks in the sack? — nik." The tagline and original message appear on one page as a single story.
For the court, defendant's website has crossed into the frontier and is a participant in the content creation. Not only did defendant drive third parties to make scandalous comments; by adding his own comments, defendant "effectively ratified and adopted the defamatory third-party post." According to the court, defendant continued to drive third parties by commenting: "I love how the DIRTY ARMY has war mentality;" "Never try to battle the DIRTY ARMY;" and "You dug your own grave here Sarah." The court concludes that defendant "played a significant role in 'developing' the offensive content such that he has no immunity under the CDA."
There is a boundary between neutral host and content creator. When an interactive service traverses that boundary, the host potentially moves out from underneath the protection of Sec. 230 immunity.
Written by Robert Cannon, Cybertelecom
In Montevideo, Uruguay this week, the Directors of all the major Internet organizations – ICANN, the Internet Engineering Task Force, the Internet Architecture Board, the World Wide Web Consortium, the Internet Society, all five of the regional Internet address registries – turned their back on the US government. With striking unanimity, the organizations that actually develop and administer Internet standards and resources initiated a break with 3 decades of U.S. dominance of Internet governance.
A statement released by this group called for “accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing.” That part of the statement constituted an explicit rejection of the US Commerce Department’s unilateral oversight of ICANN through the IANA contract. It also indirectly attacks the US unilateral approach to the Affirmation of Commitments, the pact between the US and ICANN which provides for periodic reviews of its activities by the GAC and other members of the ICANN community. (The Affirmation was conceived as an agreement between ICANN and the US exclusively – it would not have been difficult to allow other states to sign on as well.)
Underscoring the global significance and the determination of the group to have a global impact, the Montevideo statement was released in English, Spanish, French, Arabic, Russian and Chinese. In conversations with some of the participants of the Montevideo meeting, it became clear that they were thinking of new forms of multistakeholder oversight as a substitute for US oversight, although no detailed blueprint exists.
But that was only the beginning. A day after the Montevideo declaration, the President and CEO of ICANN, Fadi Chehade – the man vetted by the US government to lead its keystone Internet governance institution – met with Brazilian President Dilma Rousseff. And at this meeting, Chehade engaged in some audacious private Internet diplomacy. He asked “the president [of Brazil] to elevate her leadership to a new level, to ensure that we can all get together around a new model of governance in which all are equal.” A press release from the Brazilian government said that President Rousseff wanted the event to be held in April 2014 in Rio de Janeiro. The President of ICANN thus not only allied himself with a political figure who has been intensely critical of the US government and the NSA spying program, he conspired with her to convene a global meeting to begin forging a new system of Internet governance that would move beyond the old world of US hegemony.
Make no mistake about it: this is important. It is the latest, and one of the most significant manifestations of the fallout from the Snowden revelations about NSA spying on the global Internet. It’s one thing when the government of Brazil, a longtime antagonist regarding the US role in Internet governance, gets indignant and makes threats because of the revelations. And of course, the gloating of representatives of the International Telecommunication Union could be expected. But this is different. Brazil’s state is now allied with the spokespersons for all of the organically evolved Internet institutions, the representatives of the very “multi-stakeholder model” the US purports to defend. You know you’ve made a big mistake, a life-changing mistake, when even your own children abandon you en masse.
Here at the Internet Governance Project we take only a grim satisfaction in this latest turn of events. We have been urging the USG to end its privileged role and complete the privatization of the DNS management for nearly ten years. The proper substitute for unilateral Commerce Department oversight, we argued, was not multilateral “political oversight” but an international agreement articulating clear rules regarding what ICANN can and cannot do, an agreement that explicitly protects freedom of expression and other individual rights and liberal Internet governance principles. We have heard every argument imaginable about why this did not have to happen: no one really cared about the governance of the DNS root; there was no better alternative; the rest of the world secretly wanted the US to do this; etc., etc. A combination of arrogance, complacency and domestic political pressure prevented any action.
Had that advice been heeded, had the US sought to divest itself of its unilateral oversight on its own initiative, it could have exercised some control over the transition and advanced its cherished values of freedom and democracy. It could have ensured, for example, that an independent ICANN was subject to clear limits on its authority and to new forms of accountability, which it badly needs. Now the U.S. has lost the initiative, irretrievably. The future evolution of Internet name and number governance, at the very least, is no longer up to them.
CircleID: Registrars That Complied With "Shakedown" Requests May Now Be in Violation of ICANN Transfers Policy
At the time we posted 'Whatever Happened To Due Process,' we were unaware that we were just one of many registrars receiving these notices from the London (UK) Police.
We have since been made aware that this was part of a larger initiative against the BitTorrent space as a whole, and that most if not all of the other registrars in receipt of the same email as us folded rather quickly and acquiesced to the shakedown orders.
Since there were no charges against any of the domains and no court orders, it may be at the registrars' discretion to play ball with these ridiculous demands. However — what they clearly cannot do now, is prevent any of those domain holders from simply transferring out their names to more clueful, less wimpy registrars.
If any of those registrars denied their registrants the ability to do that, they would be in clear violation of the ICANN Inter-Registrar Transfer Policy.
Section 3, "Obligations of the Registrar of Record," clearly spells out the reasons why a registrar may deny a transfer-out request, and they are limited specifically to cases of fraud (the domain was paid for fraudulently), a UDRP proceeding or, hey, get this one, a "Court order by a court of competent jurisdiction", as well as some administrative reasons (like the domain having been registered less than 60 days ago).
What is clearly absent from that list, for a registrar of one of these torrent sites that has been taken offline, is "because some guy sent you an email telling you to lock it down".
Any registrar that has taken one of these sites offline and now impedes the registrants of those domains from simply getting their domain names out of there and back online somewhere else will be subject to the TDRP — the Transfer Dispute Resolution Policy — and if they lose (which they will), they will be subject to TDRP fees assessed by the registry operator; to quote the TDRP itself, "Transfer dispute resolution fees can be substantial".
This is why it is never a good idea to just react to pressure in the face of obnoxious bluster — in the very act of trying to defuse any perceived culpability, you end up opening yourself to real liability.
Even when Verisign seizes a domain out from under you (like they did with one of ours earlier this year), it at least happens under a "sealed warrant" — blessing it with a veneer of Soviet-era-style legality.
Written by Mark Jeftovic, Co-Founder, easyDNS Technologies Inc.