Afilias, headquartered in Dublin, is set to float on AIM, with new shares issued by the company expected to raise approximately $100m. Hal Lubsen, Chief Executive Officer of Afilias, says: "Today's announcement is an important step in the next phase of our growth, as we look to be a key player in the new programme of TLDs, and make selective acquisitions to increase the breadth and depth of our services and reach."
From the announcement:
"Afilias today announces its intention to apply for admission of its issued and to be issued Ordinary Shares to trading on AIM, a market operated by the London Stock Exchange ("Admission") and to conduct a placing of its Ordinary Shares with institutional and professional investors (the "Placing"). The Placing will comprise an offer of new and existing Ordinary Shares.
The Group is one of the world's leading providers of advanced internet services that enable Top Level Domains ("TLDs") to operate. Afilias provides services, both as Registry Operator for TLDs and as Registry Service Provider to third parties that act as Registry Operators for TLDs."
Proceeds from the Placing will be used by Afilias to:
- Acquire contested nTLDs in the upcoming auctions;
- Fund attractive acquisition opportunities including existing TLD assets and operating businesses in the Registry Operator and Registry Service Provider areas; and
- Develop the Group's existing Registrar business to become an integrated operation.
In total, Afilias supports Registries for more than 20 million Domain Names, comprising approximately 6.6 million Domain Names as a Registry Operator and 13.5 million Domain Names as a Registry Service Provider.
InterConnect announces a second date for its "Master Class on Internet Governance", running 17 – 21 November 2014 (see first announcement). InterConnect provides the following information regarding the course.
* * *
The Master Class in Internet Governance and Policy is designed to help participants understand the core principles of internet governance. The course will run 17 – 21 November 2014 (Download Course Brochure).
As policymakers, regulators, the private sector and civil society grapple with the question of who should run the Internet, Internet governance is headline news globally. Yet few organisations that depend on the Internet truly understand what Internet governance means, or how to shape it.
The Master Class in Internet Governance and Policy is a highly interactive course which equips delegates with the history, technical, legal and geographic underpinnings of the Internet, its key international policy issues and venues, and the most up to date information needed to be an effective advocate for their strategic interests.
The course is intended for:
- Professionals at ministries, national regulatory authorities, network operators and service providers.
- Individuals in the private sector representing organisations operating in the Internet industry.
- Academics, journalists and others with an interest in the Internet's legal background, political impact and effect on international relations.
For more information about InterConnect's Master Classes visit:
Just over a quarter of European social network users feel in complete control with regard to their personal data. More than two-thirds are concerned that their personal data held by companies may be used for a purpose other than that for which it was collected. Only one-third are aware of the existence of a national public authority responsible for protecting their rights regarding their personal data (TNS Opinion & Social, June 2011:1-2).

Introduction
The article discusses challenges to privacy protection in social media platforms, with a particular focus on the principle of user consent. User consent is a cornerstone in the data protection regime in Europe and elsewhere, implying that users are entitled to control over the use of their personal data (Solove, 2012; Kosta, 2013; Bygrave 2002). Based on a Danish study conducted amongst 68 high school students in October 2013, the article argues that in relation to Facebook, user consent has de facto become the price for participating and for gaining access to a social infrastructure. Moreover, while social media companies such as Facebook increasingly speak to their human rights responsibility, their business model is based on extensive data collection and third party sharing, which potentially contradicts basic data protection principles. As such, there is an increasing discrepancy between the individual safeguards stipulated in the EU data protection regime - elaborated in section 3 below - and the users’ means of exercising these rights on social media platforms.
In the literature on youth, privacy and social media, at least two conflicting perspectives on privacy are frequently presented. On the one hand are those arguing that the age of privacy is over (Brin, 1998; Kirkpatrick, 2010)1 and that youth prioritise convenience over privacy, framed as the ‘privacy paradox’ (Barnes, 2007; Nissenbaum 2010). On the other hand are those who argue that youth do care, but balance opportunities and risks in their use of social media sites (Tufekci, 2007), which creates a ‘privacy dilemma’ (Brandtzæg, Lüders et al., 2010). In her study on Facebook, Raynes-Goldie argues that the design and architecture of Facebook is based on radical transparency, whereas users have some expectation of privacy. This divergence between the goals of Facebook and its users is part of the privacy dilemma and one of the reasons users face increased privacy risks (Raynes-Goldie, 2012:74-75). Scholars such as boyd (boyd, 2014) have also illustrated the complex nature of privacy as it plays out in social media platforms, arguing that teens’ understanding of privacy is related to the ability to control a social situation rather than to particular properties of information. The present study, following a number of surveys on the topic2, shows that privacy in relation to a known circle of friends and family remains a concern to the respondents, whereas they are less occupied with the treatment of their data by Facebook and affiliated companies. It also illustrates the social strategies and interpretations the respondents apply to manage their privacy when using social media.
The article opens with a brief introduction to privacy as a human right, followed by a discussion of some of the critique that has been raised towards social media platforms vis-à-vis the right to privacy by, for example, the European Commission, the Council of Europe, and the Irish Data Protection Agency. Second, it presents the findings from a study conducted amongst Danish high school students in October 2013 concerning their privacy perceptions and practices when using social media platforms such as Facebook. Third, it discusses the implications of these findings in relation to the principle of user consent as a means of providing individuals with control over their personal information in the context of social media platforms.

2. The right to privacy
The right to privacy is a core component of international human rights law stipulated in Article 12 of the Universal Declaration of Human Rights (United Nations, 1948) and in Article 17 of the International Covenant on Civil and Political Rights (United Nations, 1966). It is also part of numerous international and regional human rights treaties and conventions such as the European Convention on Human Rights (Council of Europe, 1950). The right to privacy protects specific private domains such as a person’s body, family, home and correspondence and restricts the collection, use and exchange of personal data3 about the individual, often referred to as informational privacy (Westin, 1967). A common denominator for the different areas of privacy is access control (Rössler, 2007). This includes informational privacy (control over what others know about us); decisional privacy (control over private decisions and actions); and local privacy (control over a physical space). Individuals have a right to privacy not only in the private domain but also when acting in public spaces.
Within the member states of the Council of Europe the right to privacy is protected by Article 8 of the European Convention on Human Rights. The European Court of Human Rights has stated that while Article 8 essentially protects the individual against arbitrary interference by the state, there may be positive obligations inherent to an effective respect for privacy (K.U. v. Finland, 2 December 2008). As regards the internet, a state could arguably be liable in respect of third parties who store data on individuals (Council of Europe, October 2013). Up until now, the court has not resolved any cases dealing specifically with the collection, use and distribution of personal data by social media companies4.
As with other human rights, the protection and enforcement of the right to privacy relies on national measures such as data protection laws and mechanisms of oversight. Contrary to most other regions, the EU countries are bound by a common Data Protection Directive (European Commission, 1995), which entails various provisions and safeguards concerning the capture and flow of personal information. In an increasingly digital world, however, numerous papers and civil society statements have warned against the erosion of informational privacy and called for globally applicable data protection standards5. The ongoing reform of the EU data protection regime (Dix, 2013; European Commission, 2014)6, the White Paper on Consumer Data Privacy in a Networked World from the Obama Administration (The White House, 23 February 2012), and the revised OECD Privacy Framework (OECD, 2013) are all examples of current policy responses addressing this concern. In reaction to the revelations from former NSA contractor Edward Snowden, the UN General Assembly in December 2013 adopted the first resolution on the right to privacy in the digital age (United Nations General Assembly, 18 December 2013), acknowledging that the right to privacy is under strong pressure.

3. Online privacy and social network sites
In recent years, there has been an increasing focus on the way social media platforms handle personal data (Brandtzæg, Lüders et al., 2010; Raynes-Goldie, 2012; Bechmann, 2014), as well as the associated challenges related to the principle of user consent (Mantelero, 2014; Bechmann, 2014). In Europe, concern has been raised, for example, by the EU data protection authorities (Article 29 Data Protection Working Party, 22 June 2009), in the EU data protection reform package (Dix, 2013), by the Council of Europe (Council of Europe, April 2012), and by several of the national data protection authorities, not least the Irish one (Irish Data Protection Commissioner, 21 September 2012). Some of the key concerns and recommendations from these bodies are highlighted below.
The Opinion from the EU data protection authorities (Article 29 Working Party) concerns the interrelation between the EU Data Protection Directive and social networking sites such as Facebook (Article 29 Data Protection Working Party, 22 June 2009). The opinion stresses that the Data Protection Directive applies to social network providers in most cases, even if their headquarters are located outside of the European Economic Area. Moreover, it reiterates the obligation on social network sites to provide their users with clear information about the purposes and different ways in which they process personal data. The default settings of the service have to be privacy-friendly and allow users to specifically consent to any access to their profile’s content. Also, users should be given an opt-out option before their personal data is made available to others. Where data is used for personalised advertisements, prior consent from the user is required. With regard to remedies, social network sites should make available a tool for lodging complaints.
The EU data protection reform package reiterates several of the points raised above. One of the most controversial topics has been a proposed ‘right to be forgotten’. The right implies that when there are no legitimate grounds for retaining personal information, the data has to be deleted. According to Peter Hustinx, the European Data Protection Supervisor, the data would be attributed some sort of expiration date7. Hustinx has stressed that in the online domain economic forces work against the individual’s right to privacy, hence there is a need to strengthen the request for data deletion. “From an economic perspective, it is more costly for a data controller to delete data than to keep them stored. The exercise of the rights of the individual therefore goes against the natural economic trend”8. Other elements of the reform package with an impact on social network sites include a right for individuals to transfer personal data from one service provider to another (data portability); stronger requirements on consent when required for data processing; and a request for ‘privacy by design’ and ‘privacy impact assessment’. The latter implies risk analysis as part of new projects that may affect users’ right to privacy (European Commission, 2012).
The Council of Europe has addressed the privacy implications of social networking services in a recommendation from 2012 (Council of Europe, April 2012). The recommendation highlights two factors that threaten the right to private life. First, the lack of privacy-friendly default settings. Second, the lack of transparency about the purposes for which personal data is collected. To counter these threats a number of actions are proposed, many of which echo the concern raised at the EU level. For example, social network sites should provide users with explanations of the terms and conditions of their services in a form that is easily understandable; they should - by default - limit access by third parties to contacts identified by the user; and when allowing third party applications to access users’ personal data, they should allow users to specifically consent to access to different kinds of data.
In summary, the concerns and recommendations raised in the above-mentioned sources focus on means of making it more transparent and easier for the user to limit access to their data - e.g. privacy-friendly policies and settings - and on making demands on the companies that process and exchange personal data - e.g. a requirement of user consent prior to releasing targeted advertisements. Moreover, users should have easy access to complaint mechanisms in case of alleged privacy breaches. Keeping in mind these concerns and recommendations, the following section presents a Danish study on how high school students frame and manage their online privacy when participating in social media platforms.

4. Methodology
The present qualitative study was initiated by the consortium ‘Digital Youth’ (Digitale Unge), consisting of the Danish Media Council for Children and Young People, the Danish Consumer Council, Digital Identity, and the Danish Institute for Human Rights9. The study was a follow-up to a quantitative survey on youth, social media and privacy conducted by Digital Youth in February 2013 amongst 327 teens (12-18) and 404 adults (30-59) using a web-based questionnaire10. Based on the first survey, it was decided to examine in more detail how youth respondents perceive and manage privacy and control over personal data in their social media practices. The study follows several related surveys, as mentioned in note ii.
The study was conducted in October 2013 and consisted of eleven focus group interviews carried out in six high schools (gymnasier) located in the Copenhagen and Aarhus areas. The study included 68 students in total, with four to eight participants of mixed gender in each group. The students were in each case selected by one of the high school teachers, who had asked around for students interested in the topic and willing to participate in the study. The interviews were audio recorded and lasted approximately one hour. They all followed a semi-structured interview guide (Thagaard, 2004; Kvale, 2008) focusing on three main themes: first, the role that social media platforms play in the everyday life of the respondents; second, the strategies deployed to protect or control privacy; and third, the level of knowledge and awareness with regard to privacy and social media. The interviews were conducted in an open and explorative manner, allowing the respondents to elaborate on their experiences and interpretations of practice in relation to each theme.11

5. Results from the Danish study

The role of social media in the everyday life of the respondents
The first category of questions concerned the role that online media play in the respondents’ lives. The respondents mentioned Facebook, Instagram, Twitter and Snapchat as widely used social media services, with Facebook as the key platform from which virtually all communication originates. All of the interviewees had a profile on Facebook and shared a common expectation of being reachable via Facebook: “It is kind of expected that everyone has a Facebook profile. That you can communicate with everyone there.” (17 year old girl). It was said that if you want to participate socially, you need to be on Facebook: “There is a party at the school tomorrow. It might be announced on the school’s website, but no one has checked it out there. Everyone is invited to an event on Facebook. So it’s also used for practical information. For example that tickets can be bought on a website. And that is not mentioned on the school’s website.” (17 year old boy)
The respondents depicted themselves as "always on" via their smartphones, and described how Facebook was used for several purposes, from entertainment and maintenance of social networks to 'staying updated' on social events. Moreover, relationships with other people were also reinforced and confirmed via Facebook. For example, as explained by several of the respondents, “you are not truly a couple until it has been announced on Facebook”. The respondents highlighted their personal investment in their social media profiles and how their Facebook profile has become an extension of themselves. As one 17 year old boy describes it, when picturing the scenario of Facebook closing down one day: “… it is kind of like you have invested so much time in it and so much focus on how you present yourself. And this is your friends. So it’s kind of like a project. It’s part of you. So it’s a bit like not being able to talk. It’s a tool of communication which is very integrated in you.” But perhaps most importantly, the respondents view their social media profiles as an integrated part of their identity: "One's life is not just pictured on Facebook. It is Facebook." (17 year old boy) This is similar to findings from Bechmann’s research amongst fifteen high school students (Bechmann, 2014), which suggests that Facebook has become so large and dominant a social platform - not least in Denmark - that people 'have to be on Facebook' in order to participate in social life.

Strategies used to protect and control privacy
The second group of questions concerned the strategies deployed to control privacy. In all of the groups there were commonly shared norms and boundaries on what constitutes good and bad behaviour in relation to sharing. Examples mentioned of what not to share included emotional status updates about personal matters - parents' divorce, break-ups with girlfriends and boyfriends, etc. The respondents all used Facebook's 'privacy tools', for example, to create groups and to control access to their timelines, a chronological display of a user's history on Facebook. Yet they were also aware of the limits of these tools: “And if there are some embarrassing pictures from some parties then I usually make them invisible to all, for example if someone tags a picture of me at a party where there has been an embarrassing situation. Then you can make it “not allowed on timeline”. But I can’t delete the picture.” (16 year old boy)
Many of the respondents’ activities on Facebook took place in thematic groups created for specific purposes and for invitees only. The groups usually reflected already established social contexts such as the class, the football team, etc. In addition to the privacy tools provided by Facebook, the respondents relied on shared social norms to manage their privacy. For example many described a ‘filtering process’ that pictures went through either before or after they were posted. Again, some pictures would not be posted on the Facebook timeline as they were deemed “not suitable for Facebook”. Others would be deleted just after being posted if deemed unfit in comments by peers. As one 17 year old girl put it: ”But you also look at the picture yourself one more time and think if you would like it yourself to have it posted. (.) There are also pictures that are taken to look ugly just for fun. But in that case it doesn’t even cross my mind to post it on Facebook. That is just not Facebook material.” The sense of shared norms created an expectation among the respondents that they might control their social privacy: “It is also a sort of unwritten rule that if you hint that something needs to be deleted, then the picture should be deleted. You can write ”Yieks!” or ”ehr”. Or just ”delete”. Then it should be deleted within one minute. I mean, you see it immediately on your mobile and then you can write. Then it will be deleted quite quickly.” (17 year old girl)
While users have a right to an effective remedy when their right to privacy is potentially violated, the above findings indicate that the respondents have limited knowledge of these privacy rights and how to address a potential privacy violation12. The study found, however, that the respondents feel somewhat protected by the sense of shared social norms, most notably the ability to have undesired content deleted.

Level of knowledge and awareness with regard to privacy
The third theme focused on the respondents’ level of knowledge regarding potential privacy risks. While the respondents were conscious of controlling their privacy in relation to friends and family, they had more difficulty relating to privacy risks at state or company level. This is similar to previous findings (Bechmann, 2014; boyd, 2014) stressing that youth are more concerned with controlling their data in relation to their social circles than with potential privacy risks from the state or private companies. Some did talk about personal experiences of discovering that one of their images had been used by others to create fake profiles, or recalled being puzzled over how other people had found out information about them. Mostly, the respondents found it hard to imagine that their personal data would be of interest to anyone. Frequently, “surveillance” was described as something remote that would take place in 'totalitarian states' far away. "I feel it's not a problem (ed. state surveillance). It's unpleasant when I think about it. But then I just want to look at Facebook again and then it does not matter." (16 year old girl). In cases where the respondents were asked to think further about potential state surveillance, it was described as in principle 'not okay', 'uncanny' and 'uncomfortable': "... Just the thought is indeed uncanny. If the state monitors you personally. They do not have the right to do that and they shouldn’t have the right either." (16 year old boy). “It's a scary thought. But this only takes place in totalitarian places. But then again for instance in the United States right now where we have the NSA with Edward Snowden and all that. Where they spy on different people through social media and Google. It's not a very comforting thought." (16 year old girl)
When asked specifically about the terms and conditions they had consented to, none of the interviewed had read them. Moreover, the majority had created their profiles at a time when they were under 13. "You have heard that you probably should read those terms and conditions, because we do not know our rights. But we were very young when we created it. And then we just clicked yes." (17 year old girl). "I guess it doesn’t matter to read it (terms and conditions) because if you want a profile then you need to accept them. It doesn’t matter what it says.” (16 year old boy)
As discussed above, European data protection (as well as data protection regimes in other parts of the world) is based on the principle of consent, indicating that the individual has a right to decide whether to share his/her personal information or not. The above findings indicate that the respondents perceive their consent to Facebook’s terms of service as a ‘tick in a box’ needed in order to gain access to a crucial social infrastructure. None of the respondents had read the terms before consenting, and several stated that if the service was “doing bad stuff” they presumably would have heard about it.

The study: conclusions and perspectives
The study revealed that the respondents were very conscious about controlling their privacy vis-à-vis their social circles, i.e., about protecting their self-representation and the flow of information amongst their peers. Contrary to this, privacy risks related to surveillance13 and commercial use received limited attention. “Maybe my messages are subject to surveillance, but what can they use it for? You decide for yourself, what to share.” (16 year old girl). The quote is indicative of a sentiment that many of the interviewees shared, namely that you exercise privacy control through decisions on what to share on Facebook and what not, yet once information is ‘out there’, your ability to exercise control is non-existent. Also, the respondents had limited knowledge of privacy and data protection as a right that the individual might claim. On the contrary, several of the respondents expressed that by joining Facebook you sign away your rights, in particular those related to your photos. Since Facebook is seen as the social infrastructure, the sense amongst the respondents was that in reality you have no choice but to accept Facebook’s terms and conditions.
In recent years, companies such as Facebook have increasingly emphasised their human rights responsibilities and endorsed the UN Guiding Principles on Business and Human Rights (United Nations Human Rights Council, 21 March 2011). An often-quoted example is the Global Network Initiative - of which Facebook is a member - which aims to strengthen internet companies’ compliance with human rights standards on privacy and freedom of expression14. This commitment to privacy, however, is based on voluntary codes of conduct; thus it is largely up to the companies to decide on their data protection practices, with limited means of holding them accountable for adverse privacy impacts.
In summary, the above-mentioned findings support many of the concerns addressed by the European Commission, the Council of Europe, and the Irish data protection authority in the previous section. The interviewees do not feel they have control over their data once submitted; they have not read the terms and conditions they have consented to; and they do not perceive privacy as a right they may claim, for example, via the Data Protection Agency. As such, there is a discrepancy between the principle of user consent and the respondents’ perceived lack of control over their personal data when participating in social media platforms.

6. Conclusion: alternatives to user consent?
The study highlights that a data protection regime built around the notion of user consent does not adequately address the unequal power relation between a company perceived to provide a social infrastructure and users who depend on that service. In other words, the value of user consent as a data protection safeguard diminishes if users perceive no alternative but to accept the terms and conditions of a given service.
Several scholars have argued that there is a need to fundamentally rethink the modalities of data protection. One of the alternative models is provided by Nissenbaum, who proposes the notion of contextual integrity as a normative framework built on the premise that different contexts carry different informational norms (Nissenbaum, 2010). In this approach a ‘one-size-fits-all’ privacy concept is replaced by a framework that places emphasis on the situational systems of rules governing information flows. Accordingly, the key challenge is to ensure that information flows appropriately, and to strengthen the individual’s information control in various contexts. In consequence, privacy-invasive behaviour is related to improper (out of context) sharing and use of personal information. Nissenbaum suggests articulating the norms that are to guide specific online practices based on the well-known social situations that these practices resemble, such as information search or socialising with family and friends. As such, limitations on information flows should not solely depend on user consent but rather on context-appropriate norms. “We must articulate a backdrop of context-specific substantive norms that constrain what information websites can collect, with whom they can share it, and under what conditions it can be shared” (Nissenbaum, 2011:32). The concept is suggested both as a framework for evaluating systems that process personal information, and for designing new systems. Applying such an approach to, for example, Facebook would require analysis of the norms and rules guiding similar social situations and subsequently using these to prescribe the specific norms that Facebook should adhere to when processing personal data. An agreed norm about non-disclosure of other people’s contact information, for example, would imply that this was not allowed within the Facebook platform unless specifically requested by the person in question.
Other scholars have argued that in the era of ‘big data’ the principle of ‘privacy by consent’ has become increasingly meaningless and should be replaced by ‘privacy by accountability’, including stricter means of holding companies accountable for how they use data (Mayer-Schönberger, 2013:173-75). Mayer-Schönberger has argued that in the age of big data much of data’s value lies in secondary uses that were not foreseen when the data was collected. Hence, data protection should place less emphasis on data collection and more on the subsequent uses of data. Data is no longer collected based on a specific purpose and an informed user consent; on the contrary, the purpose of collecting the data is frequently formulated in broad generic terms and accepted by the user with limited sense of what the consent implies. “The ability to capture personal data is often built deep into the tools we use every day, from Web sites to smartphone apps” (Ibid: xx). Coupled with the fact that personal data represents commercial value to an extent not previously seen, it makes no sense to rely on user consent as the primary data protection mechanism, the argument goes. In consequence, Mayer-Schönberger suggests focusing on increased accountability for the companies that use data and increasing the power of data protection authorities as safeguards between the individual and data-processing companies such as Facebook. In the words of Mayer-Schönberger: hold companies liable when harm to data subjects occurs, rather than limit their means of data collection. “I suggest to take the individual out of the equation and give data protection authorities much more teeth in order to enforce data protection law vis-a-vis companies”15.
While alternative data protection schemes based on contextual integrity or stronger enforcement regimes may have immediate appeal, they both entail problems as well. The contextual integrity approach would require detailed analysis of a number of social contexts and situations, most likely associated with different norms across countries. As such, it raises a number of challenges and provides limited guidance in relation to implementation. Moreover, it would require a complete rethinking of the current data protection regime. As for the accountability approach, it seems to ignore or downplay the fundamental disconnect between the commercial value that personal data holds for a company and the individual’s right to privacy. Trusting that a principle of company accountability coupled with stronger enforcement mechanisms will be enough to safeguard the individual’s right to privacy seems overly optimistic in an age where personal data holds unprecedented commercial value, and where harvesting of personal data is the core of the online business model. Yet the current model based on user consent is also not convincing, as illustrated in this article. This is ironic, given that the current data protection reform within the EU - as well as within the Council of Europe - remains anchored in precisely user consent, despite the decreasing relevance of this mechanism as a measure that will de facto preserve a right to user control over personal data in the online environment.

References
Article 29 Data Protection Working Party (June 22, 2009). Opinion 5/2009 on online social networking. Brussels: EC Justice.
Barnes, S. B. (2007). A privacy paradox: Social networking in the United States. First Monday 11(9).
Bechmann, A. (2014). Non-informed Consent Cultures: Privacy Policies and App Contracts on Facebook. Journal of Media Business Studies 11 (1): 21-38.
boyd, d. (2014). It's complicated: The social lives of networked teens. New Haven: Yale University Press.
Brandtzæg, P. B., Lüders, M., Skjetne, S.H. (2010). "Too many Facebook "Friends"? Content Sharing and Sociability Versus the Need for Privacy in Social Network Sites." Journal of Human-Computer Interaction 26 (11-12): 1006-1030.
Brin, D. (1998). The Transparent Society. New York: Perseus Books.
Bygrave, L.A. (2002). Data Protection Law. Approaching Its Rationale, Logic and Limits, Kluwer Law International, The Hague, London, New York.
Council of Europe (1950). Convention on the Protection of Human Rights and Fundamental Freedoms. Strasbourg: Council of Europe.
Council of Europe (1981). Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. Strasbourg: Council of Europe.
Council of Europe (April 16, 2014). Recommendation of the Committee of Ministers to member states on a guide on human rights for Internet users. Strasbourg: Council of Europe.
Council of Europe (April 2012). Recommendation CM/Rec(2012)4 of the Committee of Ministers to member States on the protection of human rights with regard to social networking services. Strasbourg: Council of Europe.
Council of Europe (October 2013). Factsheet - New technologies. Strasbourg: Council of Europe, 7.
Dix, A. (2013). EU data protection reform opportunities and concerns. Intereconomics 48 (5): 268-285.
European Commission (1995). EU Directive on Data Protection (95/46 EC). Brussels: EC.
European Commission (2012). How will the data protection reform affect social networks? Brussels: EC. Retrieved February 4, 2013, from http://ec.europa.eu/justice/data-protection/document/review2012/factsheets/3_en.pdf
European Commission (2014). Progress on EU data protection reform now irreversible following European Parliament vote, MEMO/14/186 - 12/03/2014. Brussels, EC.
Irish Data Protection Commissioner (September 21, 2012). Report of RE-Audit. Portarlington: Irish Data Protection Commissioner.
Jørgensen, R. F., Hasselbach, G., Leth, V. (November 2013). Unges private og offentlige liv på sociale medier (Youths´private and public life on social media). Copenhagen: Digital Youth.
Kirkpatrick, M. (January 9, 2010). Facebook´s Zuckerberg Says The Age of Privacy is Over. ReadWriteWeb. Retrieved July 10, 2014, from http://www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php.
Kosta, E. (2013). Consent in European data protection law. Leiden: Brill Nijhoff.
Kvale, S. (2008). Interviews: an introduction to qualitative research interviewing. London: SAGE.
Mantelero, A. (2014). Defining a new paradigm for data protection in the world of Big Data analytics. The Second ASE International Conference on Big Data and Computing, May 27-31, 2014. Stanford University, ASE@360 Open Scientific Digital Library.
Mayer-Schönberger, V., Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Boston: Houghton Mifflin Harcourt.
Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Dædalus, the journal of the American Academy of Arts & Sciences 140 (4): 32-48.
Nissenbaum, H. F. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford, Calif.: Stanford Law Books.
O´Reilly, T. (2005). What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. O'Reilly. Retrieved September 2, 2011, from http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.
OECD (2013). The OECD Privacy Framework. Paris: OECD.
Public Voice (November 3, 2009). The Madrid Privacy Declaration - Global Privacy Standards for a Global World. The Public Voice. Retrieved September 2, 2011, from http://thepublicvoice.org/madrid-declaration/.
Raynes-Goldie, K. (2012). Privacy in the Age of Facebook: Discourse, Architecture, Consequences. Perth, Australia: Curtin University.
Rössler, B. (2007). The Value of Privacy. In G. Stocker and C. Schöpf (Eds.), Goodbye privacy - Ars Electronica 2007, p. 39-44. Ostfildern-Ruit: Hatje Cantz Verlag.
Snyder, D. (2007). The NSA's "General Warrants: How the Founding Fathers Fought an 18th Century Version of the President's Illegal Domestic Spying". Retrieved September 2, 2011, from http://www.eff.org/files/filenode/att/generalwarrantsmemo.pdf.
Solove, D. (2012). Privacy Self-Management and the Consent Dilemma. Harvard Law Review 126: 1880-1903.
Thagaard, T. (2004). Systematic approaches and Empathy. An introduction to qualitative methods. Copenhagen: Akademisk Forlag.
The European Parliament, the European Council, et al. (2007). Charter of Fundamental Rights of the European Union. Brussels: EC.
The White House (February 23, 2012). Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy. Washington: The White House.
TNS Opinion & Social (June 2011). Special Eurobarometer 359: Attitudes on Data Protection and Electronic Identity in the European Union. Brussels: European Commission.
Tufekci, Z. (2007). Can You See Me Now? Audience and Disclosure Regulation in Online Social Network Sites. Bulletin of Science, Technology & Society Bulletin of Science, Technology & Society 28(1): 20-36.
United Nations (1948). The Universal Declaration of Human Rights. New York: United Nations.
United Nations (1966). International Covenant on Civil and Political Rights. New York: United Nations.
United Nations General Assembly (December 18, 2013). Resolution adopted by the General Assembly. The right to privacy in the digital age. New York: United Nations.
United Nations Human Rights Council (March 21, 2011). Report of the Special Representative John Ruggie. Guiding Principles on Business and Human Rights: Implementing the United Nations 'Protect, Respect and Remedy' Framework. New York: United Nations.
Westin, A. F. (1967). Privacy and freedom. New York: Atheneum.
Winston, M. (2007). Human Rights as Moral Rebellion and Social Construction. Journal of Human Rights 6 (3): 279-305.
1. Brin’s book is not focused on youth practices as such but entails a general account of the proclaimed erosion of privacy. Brin argues that the right to privacy is outdated and contradicts online social practices by which personal information is widely exposed and shared across various platforms. In response he suggests deconstructing the entire notion of privacy and shifting the focus to accountability (Brin, 1998).
2. In a European context, related studies on online experiences and risks include the EC funded research project EU Kids Online (focus on kids), available at: http://www.lse.ac.uk/media@lse/research/EUKidsOnline/Home.aspx; the EC funded research project CONSENT (focus on internet users more broadly), available at http://consent.law.muni.cz/view.php?cisloclanku=2013040002, and the Special Eurobarometer 359 from 2011 (TNS Opinion & Social, June 2011). Internationally, the Internet Society conducted a Global Internet User Survey in 2012, which among other things addresses how often users read the privacy policies of online services, available at: http://www.internetsociety.org/apps/surveyexplorer/online-privacy-and-id.../. In a Danish context, Bechmann (Bechmann, 2014) has studied ‘consent cultures’ on Facebook amongst 15 high school students.
3. According to the Council of Europe Convention of 1981 for the protection of individuals with regard to automatic processing of personal data, ‘personal data’ is defined as any information relating to an identified or identifiable individual (Council of Europe, 1981).
4. The right to privacy is also stipulated in Article 7 and 8 of the EU Charter on Fundamental Rights, binding upon EU member states (The European Parliament, the European Council et al., 2007)
5. See, for example, the Madrid Privacy Declaration, that reaffirms international instruments for privacy protection and call for actions. The declaration is signed by a broad range of scholars and civil society organisations (Public Voice, 3 November 2009).
6. Retrieved August 10, 2014 from http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf; the modernisation of the Council of Europe’s Convention on the Protection of Individuals with regard to Automatic Processing of Personal Data: retrieved August 10, 2014 from http://www.coe.int/t/dghl/standardsetting/dataprotection/modernisation_en.asp
7. See article 88 of the Opinion of the Data Protection Supervisor, 14 January 2011. Retrieved August 10, 2014 from http://www.edps.europa.eu/EDPSWEB/webdav/site/mySite/shared/Documents/Consultation/Opinions/2011/11-01-14_Personal_Data_Protection_EN.pdf
8. Quote from article 84 of the Opinion of the Data Protection Supervisor, 14 January 2011. Retrieved August 10, 2014 from http://www.edps.europa.eu/EDPSWEB/webdav/site/mySite/shared/Documents/Consultation/Opinions/2011/11-01-14_Personal_Data_Protection_EN.pdf
9. Digital Youth was created in 2013 as a platform for gathering knowledge and raising awareness concerning youth practices and perceptions in relation to social media. See www.digitaleunge.dk (retrieved August 14, 2014). The author of this article is an employee of the Danish Institute for Human Rights.
10. The results of the quantitative survey are available (in Danish). Retrieved August 14, 2014 from http://digitaleunge.files.wordpress.com/2013/06/teenagere-deres-private-og-offentlige-liv-pc3a5-sociale-medier.pdf
11. The author participated in the study design, data collection and data analysis together with Werner Leth and Gry Hasselbach from the Danish Media Council. For a (Danish) report on the findings of the study please refer to (Jørgensen, Hasselbach et al. November 2013).
12. The right to remedy - stipulated in, for example, Article 13 of the European Convention of Human Rights - implies that remedies should be accessible, affordable and capable of providing appropriate redress (Council of Europe, 16 April 2014:6).
14. In 2014, the first independent audit of Google, Microsoft and Yahoo’s compliance with GNI norms has been conducted. The assessment report of 8 January 2014 is available. Retrieved January 14, 2014 from http://globalnetworkinitiative.org/news/gni-report-finds-google-microsoft-and-yahoo-compliant-free-expression-and-privacy-principles
15. Response given by Viktor Mayer-Schönberger at “Big Data” research seminar, Copenhagen Business School, 18 September 2013.
Nancy Scola reporting in the Washington Post: "The latest battle over who should run the Internet will be waged in the South Korean port city of Busan over the next three weeks. For U.S. officials headed to the United Nation's International Telecommunication Union's Plenipotentiary Conference, the goal is simple: prevent a vote. In short, the State Department's approach is this: Convince the representatives of the other 192 member countries attending the conference that the 150-year-old U.N. technical body is the wrong forum for existential questions about how the Internet should work."
Follow CircleID on Twitter
This month, we are seeing a very busy global ecosystem with ICANN 51, the UN General Assembly meeting to discuss ICT for Development in New York and now the 19th ITU Plenipotentiary in Busan. October, already home to Pinktober and Oktoberfest, has become saturated with what might be called ICTober, and that makes me more reflective. First I would like to give a massive shout out to all those battling cancer, the survivors, and the families who wage war against cancer. May you all walk on and walk strong!
Throwbacks from the UNGA 2nd Committee Meeting Discussions on ICT4D
I was really pleased to hear about mobile technology being used to fight Ebola. Clearly, the use of ICT to manage global health crises is gaining momentum. Israel mentioned how the Israeli app about Ebola has been downloaded over 5,000 times in West Africa and is available in the Jola, Krio, Liberian English and Wolof languages.
Sri Lanka shared how its island-wide rural telecenter network of over 750 centers, called "Nenasala" or "wisdom outlets", works as a people-centric ICT knowledge-dissemination mechanism: it mainstreams indigenous knowledge and content development and delivers e-Government services in the local languages through public-private partnerships. Sri Lanka remarked how women and youth rural leaders are the backbone of this network.
China shared that since officially gaining access to the Internet twenty years ago, its internet connectivity has grown exponentially, with over 600 million internet users and 3 million websites. China's e-commerce sales in the first six months of the year reached 5.66 trillion RMB yuan, an annual increase of 30.1%. It hosts four of the world's top ten internet companies, and its Internet industry continues to grow at an annual rate of 30%.
Mankind's natural tendency is that the masses are generally resistant to change. The leaders of today's technological movements can be in danger of becoming irrelevant tomorrow simply for refusing to evolve, change and adapt. At times those considered leaders of the current technological phenomenon look down on the early developments of the "new, weird, unusual", but innovators who commit to plunging into the deep, with faith in what they perceive can be built, have often birthed new phenomena despite criticism, experimentation and failure.
What makes countries or companies different in terms of growth? The ability to always be learning, probing and discovering is key to maintaining relevance, even when one has reached what can reasonably be perceived as the height of today's success. Consider the following migratory tracks:
- From Telegraphs to Telephones
- From Telephones to Voice over Internet Protocols
- From circuit switching to packet switching
- From domains to dotless domains
- From Internet 1.0 in 1969 to Internet 4.0
One wonders what is in store for us. With the constant technological advancements in the world, we know for certain that innovation will come from those that are hungry to innovate, those that refuse to settle but keep investing, researching and building. We know that some things are constant, and these include things like competition, battle for control but this is a good time to reflect and focus on the things that matter.
I thought I would share this story as originally told by William Von Allven in 1998.
DePew's Fatal Mistake — Lessons from Western Union and the Telephone?
When Bell first advocated for voice telephony, the leaders of the previous wave of technology, notably the President of the Telegraph Company, ridiculed its potential.
See extract from Warren Bender, of A.D. Little, Inc. as published in an early issue of the Transactions of the IEEE Systems, Man & Cybernetics Society (source):
In 1876, Alexander Graham Bell and his financial backer, Gardiner G. Hubbard, offered Bell's brand new patent (No. 174,465) to the Telegraph Company — the ancestor of Western Union. The President of the Telegraph Company, Chauncey M. Depew, appointed a committee to investigate the offer. The committee report has often been quoted. It reads in part:
"The Telephone purports to transmit the speaking voice over telegraph wires. We found that the voice is very weak and indistinct, and grows even weaker when long wires are used between the transmitter and receiver. Technically, we do not see that this device will be ever capable of sending recognizable speech over a distance of several miles.
"Messer Hubbard and Bell want to install one of their "telephone devices" in every city. The idea is idiotic on the face of it. Furthermore, why would any person want to use this ungainly and impractical device when he can send a messenger to the telegraph office and have a clear written message sent to any large city in the United States?
"The electricians of our company have developed all the significant improvements in the telegraph art to date, and we see no reason why a group of outsiders, with extravagant and impractical ideas, should be entertained, when they have not the slightest idea of the true problems involved. Mr. G.G. Hubbard's fanciful predictions, while they sound rosy, are based on wild-eyed imagination and lack of understanding of the technical and economic facts of the situation, and a posture of ignoring the obvious limitations of his device, which is hardly more than a toy… .
"In view of these facts, we feel that Mr. G.G. Hubbard's request for $100,000 of the sale of this patent is utterly unreasonable, since this device is inherently of no use to us. We do not recommend its purchase."
Bell went on to obtain controlling interest in Western Union by 1882.
Written by Salanieta Tamanikaiwaimaro, Director of Pasifika Nexus
Volume and Number of Breach Incidents, 2005‐2014 – Source: CMDS
The summary of findings from preliminary analysis reveals that over the last decade —
• Some 229 data breach incidents involved the personal records of people in Europe. Globally, all these incidents resulted in the loss of some 645 million records, though not all of these breaches exclusively involved people in Europe. Within Europe, we confirmed 200 cases involving people in Europe, and 227 million records lost in Europe-specific breaches.
• The total population of the countries covered in this study is 524 million, and the total population of internet users in these countries is 409 million. Expressed in ratios, this means that for every 100 people in the study countries, 43 personal records have been compromised. For every 100 internet users in the study countries, 56 records have been compromised.
• Fully 51 percent of all the breaches involved corporations and 89 percent of all the breached records were from compromised corporations. Among all the kinds of organizations from which personal records have been compromised, 41 percent of the incidents involved clear acts of theft by hackers, but 57 percent of the incidents involved organizational errors, insider abuse, or other internal mismanagement (2 percent unspecified).
• The level of sophistication and detail in journalism about issues of privacy and personal data has increased, but is largely driven by national "mandatory reporting" rules in particular countries. In other words, we know most about data leaks in countries where organizations are required to report that personal records have been compromised.
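The per-100 ratios in the bullets above can be reproduced directly from the reported figures. A quick back-of-the-envelope sketch, using only numbers taken from the text:

```python
# Check of the ratios quoted in the findings (all values in millions,
# taken from the study figures reported above).
records_lost_europe = 227   # records lost in Europe-specific breaches
population = 524            # total population of the study countries
internet_users = 409        # internet users in the study countries

per_100_people = records_lost_europe / population * 100
per_100_users = records_lost_europe / internet_users * 100

print(f"records compromised per 100 people: {per_100_people:.0f}")          # 43
print(f"records compromised per 100 internet users: {per_100_users:.0f}")   # 56
```

Note that these ratios use the 227 million Europe-specific records, not the 645 million global figure, which is why they match the 43 and 56 quoted in the second bullet.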
ICANN has taken a solid stance with regard to contention sets: those yet to be resolved will be forced into auctions of last resort in the coming months. As expected, this has increased the velocity of private settlements between applicants, whether via deals or private auctions.
It seems like most applicants (wisely) don't want to see their funds going into ICANN coffers unnecessarily.
While the prices paid for TLDs at private auction are a closely guarded secret, talk abounds in industry circles of prices approaching US$20 million for some contention sets.
Are these prices an outstanding investment or sheer lunacy?
The answer lies in being able to implement a strategy that generates solid revenues, whilst understanding the true costs of running a TLD.
Take for example the .sex TLD which was recently reported as having sold for USD 3 million. Intuitively this could appear to be a bargain for perpetual ownership of such a strong keyword TLD, considering the size of the industry, and the fact that directly comparable but much less flexible assets sex.com and sex.xxx sold for $13 million and $3 million respectively.
Or was the price tempered given potential concerns of 'unexpected' delays or political concerns such as those that impacted the .xxx TLD or queries over the competitive impacts of .xxx, .adult, .porn etc.?
While domain industry hyperbole over auction prices may be no more than scuttlebutt, there can be no denying that there have been some exceptionally high auction prices through the transparent ICANN auction of last resort process, such as .vip ($3M), .buy ($4.5M) and .tech ($6.7M).
What price is too high to pay?
At some point, without an amazingly viral marketing campaign and a magically cheap operating plan, the operation of your TLD can send you broke in short order.
Having been tasked by some applicants to assist with this very issue, I'd like to share the first two questions I am generally asked when sitting down with customers to define a TLD auction strategy:
- How do you appropriately value the asset to gain enough capital to win at auction?; and
- At what price does this TLD become unsustainable in terms of ROI?
The answer to both of these questions can only be divined after comprehensive analysis of both sides of the ledger: the potential revenues AND the real-world costs. Each side has its own significant considerations.
Calculating forecasted registrations from Sunrise and ten years of operating is relatively simple.
However, smart applicants are thinking beyond just x% of the total target market * wholesale price and realizing that the real benefit of operating a TLD is in finding the hidden value of these complex assets.
The value lies in the key partnerships, spinoff properties, premium domain name sales and associated businesses (just to name a few), which can generate far more revenue than domain name sales alone.
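As a sketch of the naive "x% of the target market times wholesale price" forecast mentioned above, the following toy model uses entirely hypothetical inputs; the market size, capture rate, price and growth rate are illustrative assumptions, not data for any real TLD:

```python
# Hypothetical first-pass revenue forecast for a new TLD.
# Every input here is an illustrative placeholder.

def domain_revenue(target_market, capture_rate, wholesale_price,
                   annual_growth=0.05, years=10):
    """Total domain-sale revenue over `years` of operation, assuming
    a share of the target market is captured in year one and the
    registration base then grows at a flat annual rate."""
    registrations = target_market * capture_rate
    total = 0.0
    for _ in range(years):
        total += registrations * wholesale_price
        registrations *= 1 + annual_growth
    return total

# e.g. a 2M-name target market, 2% capture, $25 wholesale price:
base_case = domain_revenue(2_000_000, 0.02, 25.0)
```

The point of such a sketch is usually to show that wholesale registrations alone rarely justify an eight-figure auction bid, which is why the hidden value streams above matter so much.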
The second part of any good analysis is costs.
There is no such thing as a free lunch, and in the case of a TLD, you've got the obvious costs such as your registry, marketing and registrar management, and the not so obvious including managing ICANN compliance and dealing with an increasingly volatile regulatory environment.
Each of these has the potential to send your business spiraling backwards if not managed correctly.
Understanding and predicting all of these cost centres is one of the most important elements of working out your TLD's potential ROI. To effectively complete this task, you really need the insight of folks that have been managing TLDs for many years.
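Putting revenues and costs together, a rough ceiling on a rational auction bid can be sketched as the net present value of forecast operating cash flows. Again, every figure here is a hypothetical placeholder, not real TLD economics:

```python
# Hypothetical sketch: the maximum auction bid at which a TLD still
# meets a target return, given forecast annual revenues and costs.

def max_sustainable_bid(annual_revenue, annual_costs, years=10,
                        discount_rate=0.10):
    """NPV of net operating cash flows over the forecast horizon;
    bidding above this amount breaks the target ROI."""
    npv = 0.0
    for year in range(1, years + 1):
        npv += (annual_revenue - annual_costs) / (1 + discount_rate) ** year
    return npv

# e.g. $1.5M forecast revenue against $600k costs per year over ten years:
ceiling = max_sustainable_bid(1_500_000, 600_000)
```

A model like this is only as good as its inputs, which is exactly why the cost-centre analysis above, and the experience of people who have run TLDs, matters before auction day.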
Firstly, you're not alone here. If all of this applies to you, you can rest assured that it's impacting your competitors too.
However, it is time for you to get serious. If your auction strategy can achieve more ROI than your competitors, then you'll enter the auction with a strategic advantage that could prove the difference in your one shot at securing your TLD.
A good auction strategy relies on two fundamental principles:
- Knowing what value the TLD represents to you
- Knowing what value the TLD represents to your competitors
If you aren't absolutely certain you know the answer to the two elements above, you might be blowing your one and only chance.
Having a clear vision, a strong auction strategy and some help from those with experience in the process will ultimately decide whether you walk away on auction day with a big frown or a profitable TLD.
Written by Ryan Baker, Domain Name Industry Consultant at ARI Registry Services
Democracy at its simplest and most basic is governance by and for the people. Of course, there are a variety of conventions and values that are often invoked in the context of “democratic governance” and particularly for “democratic governments”, but democracy as governance by and for the governed would seem to be sufficient as a definition and particularly in the absence of formal structures, rules, behaviours or governmental structures.
I’ve elsewhere discussed how various instances of multi-stakeholderism (MSism) have operated in the absence of, or even in opposition to, conventional understandings of democracy. However, continuing discussion and evolution in how governance of the global Internet is conceptualized is suggesting an approach involving “democratic multi-stakeholderism” (DMSism). This, it is being suggested, may be one method of squaring the circle: the historical circumstances of Internet development – largely but not exclusively through multi-stakeholder processes primarily driven and controlled by those with a technical interest in and responsibility for Internet development – are perceived as being necessary for the continued well-being of the Internet as it enters an increasingly complex and politicized environment. This, it is argued, is particularly the case as matters of “Internet Governance” shift focus from largely technical issues to broad areas of public policy as impacted by actions by and on the Internet.
The difficulty with creating or even conceptualizing a “democratic multi-stakeholderism” is that at its core MSism is not “democratic”. The governance notion implicit in MSism is one where governance is by and for those with a “stake” in the governance decision, thus shifting the basis of governance from one based on people and (at least indirectly) citizenship or participation in the broad community of the governed to one based on “stakes”, i.e. an “interest” in the domain to which the governance apparatus is being applied. The historical notion of “stake” in a context such as this one generally refers to a financial or ownership interest in the area under discussion, but in the evolving Internet Governance sphere (and others) this has been extended to include a “technical stake” (as in a professional interest) or even a “normative stake”, as in ensuring an outcome consistent with one’s values or norms.
What is not included in any of the conventional approaches to MSism however, are broad notions of democratic participation (or accountability) i.e. where the governance is structured so as to include for example, those without a “direct” stake in the outcomes but who nevertheless might as a consequence of their simple humanity be understood to be impacted by the decisions being taken. Discussions around these matters are often dealt with within the MS community by talking about the need (or not) to include (technology/Internet) “users” as “stakeholders”. I’ve looked at that discussion elsewhere and argued that when it comes to the current status of the Internet we are all i.e. all of humanity, now in one way or another being impacted either directly or indirectly by the Internet and in that sense we are all “stakeholders” in how the Internet is framed and enabled in its future evolution (i.e. “governed”).
By extending “stakeholder” status to “users” and then recognizing that we are all in some way “Internet users”, the problem of DMSism, some argue, may be solved. The problem remains, however, that the MS approach currently being proposed involves a degree of equality of participation/influence by each of the stakeholder groups (in the Internet Governance jargon, “equal footing”). In this instance that would mean that decisions in which, for example, the private sector or government or the technical community was highly influential would not by definition be governance decisions made by the governed, except in the trivial sense that since those stakeholder groups also consist of people, all decisions would of course be made by “people” whatever their (temporary) stakeholder status.
To me it is quite clear that “democratic governance” and “multi-stakeholder governance” are internally in contradiction with each other. At their core, democracy, as in the “rule of the people”, is one form of government, and multi-stakeholderism, as in the “rule of stakeholders”, is another and competing form. I don’t think they can be reconciled.
Some are arguing that elements of Participatory Democracy (PD) may provide the appropriate direction and this certainly may be the case. However, current experience with PD suggests that there is considerable need for maturation in these processes and particularly in developing means for effective and efficient decision making and for scaling from localized small scale to larger processes.
What I do see as possible, and where I think our collective thinking should go, is toward redefining how democratic governance can and should operate in the Internet era, particularly (or at least initially) in the “governance” of the global Internet. The Internet “has changed” everything, including how we can and should govern ourselves and the various aspects of our daily and collective lives, both by changing how we live those lives and by changing how we are able to act and project ourselves in our lived and collective worlds, physically and virtually. But to do this we need to evolve our institutions and mechanisms of governance: not by discarding current ones such as democracy, which has done so much to enable, empower and enrich the lives of all who have access to it, but by allowing and facilitating an evolution in those institutions and mechanisms to take advantage of the new opportunities that technology provides and to respond to the new risks and challenges which technology has equally presented to us.
The list of those opportunities and challenges is a long and growing one and our first task is to develop the means for assimilating and responding to these. A first step in this long road is to begin the process of identification of the issues which need to be addressed in these revised mechanisms for democratic governance in the Internet era:
1. The need for a means to incorporate technical expertise and those who consider themselves neutral technical stewards of various aspects of the Internet into mechanisms for Internet governance and to broaden the base of this stewardship to include those from a wide diversity of backgrounds and interests
2. Finding ways of responding in our strategies and mechanisms of governance to the speed of technology change and the unpredictability of the impacts of these changes including through economic and social redistribution, disruption of production systems and employment, huge transfers and accumulations of wealth (and power), among others
3. Recognizing the apparent disengagement of large numbers of the population from current conventional governance and representative processes
4. Reacting to and finding ways of incorporating the apparent desire for direct (disintermediated) engagement of large numbers of the population in current informal technology mediated processes associated with the management of various activities associated with daily living particularly in developed societies
5. Taking as a necessary challenge finding ways of resolving the escalating divides in the technology sphere including between those who have and are able to use online systems for purposes of engagement and those who are not or less able because of issues of location, income, gender, technical and other forms of literacy among others
6. Finding mechanisms to respond to the globalization of the nature of the decision making/consultation which needs to be undertaken given the globalized nature of the issues/technology
7. Developing the fortitude to not be intimidated by the extreme significance of the matters under discussion given the vast economic, political, strategic and security interests among others now impacted by the Internet and digital platforms overall, thus increasing the likelihood even inevitability of attempts at undemocratic subversion of democratic processes in support of one or another corporate or national interest
8. Recognizing and celebrating the opportunity for using digital means to extend opportunities for effective participation, for enhancing the quality of decision making through information provision and support for dialogue
The previous blog post described the progress of the IANA transition in the IETF-led protocols space and the lack of progress in the numbers space. Feeling the pressure, the Numbers Resource Organization has since announced a more developed process for the numbers space.
The names part of the IANA transition is the outlier. A process is in place, but no one has any idea what it will produce or how the results will mesh with the proposals for protocols and numbers. Both the protocols and the numbers communities have arms-length and potentially severable relationships with ICANN. The names community, on the other hand, is a wholly owned subsidiary of ICANN, Inc., which runs both the policy process (some would say that its staff and board play a major role in shaping what the policy is) and the IANA functions that implement the policies in the root zone. The NTIA IANA functions contract is the only thing that requires separation between the policy making apparatus of ICANN and the IANA implementation – and the NTIA is going away. Add to that the fact that the domain name market is where most of the money and politics of Internet governance are concentrated, and so attracts the most attention from governments.
While the IETF and numbers people seem to be pretty happy with the services they are getting from ICANN’s IANA, most of the stakeholders in the names space are mistrustful of ICANN; they believe that its powers need to be limited and its board made more accountable to the community. The names environment is complicated further by the distinction between generic top-level domains (gTLDs) and country code top-level domains (ccTLDs). gTLDs such as .com and .org are based on elaborate contracts with ICANN. As the entity that “licenses” them to run a top-level domain, ICANN can impose expensive and burdensome obligations on gTLD registries with its contracts and policies, and ICANN also extracts substantial fees from them.
Hardly any of the ccTLDs, on the other hand, have contracts with ICANN. They merely rely on it to update their data in the global root zone. The ccTLDs’ main worry is about how ICANN handles redelegation requests – i.e., requests to transfer control of the country code domain from one party to another. Currently, it is the IANA that receives and processes these requests. In redelegation requests, the line between policy making and implementation can get a bit blurry, as the IANA must decide, in effect, whether a request to shift control of the country domain is justified. To the ccTLD registries ICANN is only a latent threat, but a serious threat nonetheless. ccTLDs want to make sure that the transition does not give ICANN any more centralized power over them; they will probably be seeking institutionalized safeguards regarding redelegations and other things affecting their autonomy.
Thus, while they are different in so many other ways, both ccTLDs and gTLDs see the accountability of ICANN’s policy process as a life or death issue. They do not see ICANN as a largely innocuous home of a highly technical and usually well-run coordination service. The names community not only wants to make sure that the transition results in an accountable, efficient and secure IANA, it also wants to use the IANA transition as a point of leverage to get major reforms in the California corporation’s governance structure. As a result, the names community has not only convened a process to develop a proposal in response to the NTIA’s call for a stewardship transition (the Cross-Community Working Group on the IANA transition, or CWG-IANA), it has also set up a separate process to come up with proposals for improving ICANN’s accountability and reforming its governance (the ICANN Enhanced Accountability and Governance group). It has then broken down the work of the Enhanced Accountability and Governance group into two tracks: track 1 concentrating on accountability reforms that must be made before the IANA transition, and track 2 referring to reforms that can wait until after the transition.
These different requirements make it likely that the IANA functions will split, with names separated from the protocol and numbers functions. The possibility was made clear by InternetNZ’s Jordan Carter, who stated on the Onenet list:
If we are going to have a successful transition, it’s really important for the numbers and protocols folks to understand that: a) they have superior accountability situations to the names people today; b) the names people cannot copy number/protocol accountability mechanisms because they aren’t organised outside ICANN; c) it isn’t possible for names to organise outside ICANN in the way numbers/protocol people do; d) there may need to be structural changes or new bodies to provide a workable settlement for names; e) without a workable settlement for names, there isn’t going to be a transition.
While Carter is right that a transition that satisfies the protocols community will do little or nothing to fix the problems faced by the names community, it is also true that the reforms called for by the names community could have huge side effects on the environment in which the protocols-related IANA functions operate.
Responding to Carter from the perspective of the technical community, Daniel Karrenberg, one of the founders of the European numbers registry RIPE-NCC, said:
What incentive would there be [for the protocols and numbers people] to agree to additional mechanisms designed specifically to address names’ issues? What if these are perceived to add unnecessary complications for the working mechanisms in the protocols/numbers area? Would it surprise you if a perception arose in the protocols/numbers communities that they are being ‘held hostage’ by the names communities?
Here are a couple of (not impossible) scenarios that illustrate the problem.
Suppose that in the future ICANN/IANA takes over the Root Zone Maintenance Function as well as the current IANA functions, but its operations fail to live up to current reliability standards, leading to major dissatisfaction in the names community. Suppose, meanwhile, that the IETF is perfectly happy with ICANN’s performance of the protocol registry IANA functions. The names people want a new IANA; the IETF people don’t. Who wins?
Here is a wilder (but again, not impossible) scenario. Suppose that, in order to control domain name policies and to capture more of the ample revenues the industry generates, the GAC succeeds in taking control of ICANN from within. ICANN’s policy process becomes, for all practical purposes, an intergovernmental or multilateral one. It then leverages its control of IANA to put pressure on IETF to become part of the ITU. Obviously, IETF would seek to move its protocol registry away from ICANN at that point – but suppose ICANN resists this, refusing to yield the IANA.org domain name and the IANA trademark that it currently holds. Who is really the IANA at that point?
Trying to keep all the IANA functions together in the same organization as ICANN’s policy process may be a threat to the stability of the protocols function and a nettlesome constraint on the solution set for names.
To conclude, the twin assumptions that 1) the IANA functions operator must be a single entity combining names, protocols and numbers, and 2) that the single entity should be inside a corporation that is responsible for policy making for domain names not only have practical problems – they pose major risks to the future autonomy of the Internet.
Although the ICG process of allowing each community to develop its own IANA transition proposal has helped to uncover these dilemmas, it also minimizes them. If all of these communities were thrown into the same cauldron to work out their desired transition plan, the temperature would surely rise even higher and the conflicts would be even more intense.