Late Wednesday, September 28, the U.S. Congress passed a short-term funding bill to avert another government shutdown. The bill did not include any restrictions, prohibitions or riders related to the ICANN transition. As an organization that has consistently advocated the end of unilateral U.S. government control for more than a decade, the Internet Governance Project enthusiastically welcomes this historic event.
In the debate over the future of ICANN, one of the most important points about the U.S. plan to end its control almost got lost in the noise.
The transition is not “giving the Internet away,” neither to foreign governments nor to ICANN. It is giving the Internet to the people – the people who use it, operate its infrastructure and run its services. The people of the Internet – the “global multi-stakeholder community” to which the Commerce Department referred in March 2014 when it kicked off the stewardship transition – are not confined to the United States. They are everywhere. If freedom entails the right to self-governance, then the transition promotes and advances it.
The Internet protocols were created 35 years ago to provide universal compatibility in data communications. In pursuit of that goal, the software was designed in a way that simply did not refer to national boundaries or governmental jurisdictions. As one Internet engineer put it, “it’s not being rude, they just weren’t relevant.”
To remain true to that vision, a nongovernmental, global regime for governing the domain name system (DNS) was created. That approach was favored by the Internet technical community, most internet businesses, and by most Republicans and Democrats for the past two decades. The idea behind ICANN was to keep policy making for the global DNS out of the hands of governments and intergovernmental organizations so that the rules governing domains would not be fragmented by jurisdiction and burdened by geopolitics and censorship. The only way to do that was to create a new, transnational regime based on non-state actors, with a balanced scheme of representation for individuals, civil society, business and other stakeholder groups. It has not been an easy task to create this regime, but now it is done. Or rather, this is the end of the beginning.
The rough ride through Congress is Exhibit A in the case for ending unilateral U.S. control. ICANN and the Internet’s naming infrastructure became a domestic political football: drawn into short-term partisan politics and special-interest funding, subjected to opportunistic shifts of position, and used as a means of whipping up nationalistic hubris and xenophobic fears. Meanwhile, the 91% of the world’s internet users who are not in the U.S. stood by, unrepresented and helpless.
But it is also true that this innovation in global governance could have come only from this country. Only the U.S. had the vision and values to propose a form of Internet governance led by non-state actors. The implicit ideal behind the new regime was the principle of popular sovereignty, the original concept behind democratic national governments but extended to a global scale. Americans should be proud of that accomplishment as reflecting – and expanding into a new, globalized realm – the revolutionary principles of self-governance upon which their country was founded. The ICANN transition is the final step in the institutionalization of this nongovernmental regime.
Critics who claimed that the transition would “give the internet to foreign governments” were not just wrong, they were twisting the transition into its opposite. In 2005, during the World Summit on the Information Society, authoritarian governments were very hostile to the idea of ICANN. They knew how revolutionary this new institution was. They wanted governments to be in control. China, Brazil, Iran, Russia, Saudi Arabia and even some European governments thought that public policy for the Domain Name System should be made by nation states, not by a new, open, nongovernmental agency. When Senator Cruz and his supporters called for U.S. control of the Internet, they sounded a lot like those governments. Their logic and their arguments were the same.
They didn’t seem to understand that you cannot give special powers over a global communications infrastructure to one government without giving all other governments the idea that they should also share some control. Sovereign equality is a basic principle of international relations. If the opponents of the transition had succeeded in blocking it based on claims that the Internet belongs to the US, they would have pushed us back into the world of nation-states, profoundly undermining the cause of Internet freedom.
Just as the democratic revolutions of the 18th and 19th centuries made it clear that “the price of liberty is eternal vigilance,” we must now be prepared to keep ICANN, Inc. under constant scrutiny. Its new accountability arrangements have yet to be tested, and simply will not work unless the engaged community insists that the corporation adheres to them.
Sitting at the edge of the network and rarely configured or monitored for active compromise, the firewall today is a vulnerable target for persistent and targeted attacks.
There is no network security technology more ubiquitous than the firewall. With nearly three decades of deployment history and a growing myriad of corporate and industrial compliance policies mandating its use, no matter how irrelevant you may think a firewall is in preventing today's spectrum of cyber threats, any breached corporation found without one can expect to be hung, drawn, and quartered by shareholders and industry experts alike.
With the majority of north-south network traffic crossing ports associated with HTTP and SSL, corporate firewalls are typically relegated to noise suppression — filtering or dropping network services and protocols that are not useful or required for business operations.
From a hacker's perspective, with most targeted systems providing HTTP or HTTPS services, firewalls have rarely been a hindrance to breaching a network and siphoning data.
What many people fail to realize is that the firewall is itself a target of particular interest — especially to sophisticated adversaries. Sitting at the very edge of the network and rarely configured or monitored for active compromise, the firewall represents a safe and valuable beachhead for persistent and targeted attacks.
The prospect of gaining a persistent backdoor to a device through which all network traffic passes is of inestimable value to an adversary — especially to foreign intelligence agencies. Just as World War I combatants sent intelligence teams into the trenches to find enemy telegraph lines and splice in eavesdropping equipment, and U.K. and U.S. spy agencies tunneled under Berlin in the 1950s to physically tap Soviet phone lines, today's communications traverse the Internet, making the firewall a critical junction for interception and eavesdropping.
The physical firewall has long been a target for compromise, particularly for embedded backdoors. Two decades ago, the U.S. Army circulated a memo warning of backdoors the NSA had uncovered in the Check Point firewall product, with advice to remove it from all DoD networks. In 2012, a hardcoded backdoor was introduced into Fortinet firewalls and products running their FortiOS operating system (though it was not publicly disclosed until years later). That same year, the Chinese network appliance vendor Huawei was effectively banned from U.S. critical infrastructure by the federal government after investigators raised concerns about backdoors. And most recently, Juniper alerted customers to the presence of unauthorized code and backdoors in some of its firewall products — dating back to 2012.
State-sponsored adversaries, when unable to backdoor a vendor's firewall through the front door, have reportedly paid for weaknesses and flaws to be introduced — making products easier to exploit at a later date. For example, it has been widely reported that the U.S. government paid OpenBSD developers to backdoor the project's IPsec networking stack in 2001, and that in 2004 the NSA paid RSA $10 million to ensure that the flawed Dual_EC_DRBG pseudo-random number-generating algorithm was the default in its BSAFE cryptographic toolkit.
If those vectors were not enough, as shown by the Snowden revelations in 2013 and the Shadow Brokers data dump of 2016, government agencies have a continuous history of exploiting vulnerabilities and developing backdoor toolkits that specifically target firewall products from the major international infrastructure vendors. For example, the 2008 NSA Tailored Access Operations (TAO) catalogue provided details of the tools available for taking control of Cisco PIX and ASA firewalls, Juniper NetScreen and SSG 500 series firewalls, and Huawei Eudemon firewalls.
Last but not least, we should not forget the inclusion of backdoors designed to aid law enforcement — such as "lawful intercept" functions — which, unfortunately, may be controlled by an attacker, as was the case in the Greek wire-tapping case of 2004-2005 that saw a national carrier's interception capabilities taken over by an unauthorized technical adversary.
As you can see, there is a long history of backdoors and threats that specifically target the firewall technologies the world deploys as the first line of defense for corporate networks. So is it any surprise that, as our defense-in-depth strategies get stronger and newer technologies keep a closer eye on the threats operating within corporate networks, the firewall becomes an even more valuable and softer target for compromise?
Firewalls are notoriously difficult to protect. We expect them to blunt attacks from all comers while holding the (obviously false) assumption that they themselves are not vulnerable to compromise. Now, as we increasingly move into the cloud, we are arguably more exposed than ever to backdoors and the exploitation of vulnerable firewall technologies.
Whether tasked with protecting the perimeter or operations within the cloud, organizations need increased vigilance when monitoring their firewalls for compromise and backdoors. As a security professional, you should ensure you have a defensible answer for "How would you detect the operation of a backdoor within your firewall?"
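There is no single answer to that question, but one defensible starting point is integrity monitoring: regularly fingerprint firmware images and exported configurations, then compare them against a known-good baseline. The sketch below is a minimal illustration of that idea; the artifact names and contents are invented for the example, and real deployments would source baselines from vendor-signed images and out-of-band config exports.

```python
import hashlib

def fingerprint(blob: bytes) -> str:
    """SHA-256 fingerprint of a firmware image or exported config."""
    return hashlib.sha256(blob).hexdigest()

def diff_against_baseline(observed: dict, baseline: dict) -> list:
    """Return names of artifacts whose current fingerprint no longer
    matches the known-good baseline (or that are missing entirely)."""
    return sorted(
        name for name, good in baseline.items()
        if observed.get(name) != good
    )

# Illustrative data: pretend the running firmware image was altered
baseline = {"firmware.bin": fingerprint(b"vendor-release-1.2.3"),
            "running-config": fingerprint(b"permit tcp any any eq 443")}
observed = {"firmware.bin": fingerprint(b"vendor-release-1.2.3-trojan"),
            "running-config": fingerprint(b"permit tcp any any eq 443")}
print(diff_against_baseline(observed, baseline))  # ['firmware.bin']
```

Hash comparison alone will not catch a competent adversary who controls the device's reporting path, so it belongs alongside egress traffic monitoring and management-plane audit logging, not in place of them.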
Written by Gunter Ollmann, Chief Security Officer at Vectra
Follow CircleID on Twitter
A few months ago I published a blog post about Verisign's plans to increase the strength of the Zone Signing Key (ZSK) for the root zone. I'm pleased to report that we have started the process: we pre-published a 2048-bit ZSK in the root zone for the first time on Sept. 20, and will publish root zones signed with the larger key beginning Oct. 1, 2016.
To help understand how we arrived at this point, let's take a look back.
Beginning in 2009, Verisign, the Internet Corporation for Assigned Names and Numbers (ICANN), the U.S. Department of Commerce, and the U.S. National Institute of Standards and Technology (NIST) came together and designed the processes and plans for adding Domain Name System Security Extensions (DNSSEC) to the root zone. One of the important design choices discussed at the time was the choice of a cryptographic algorithm and key sizes. Initially, the design team planned on using RSA-SHA1 (algorithm 5). However, somewhat late in the process, RSA-SHA256 (algorithm 8) was selected because that algorithm had recently been standardized, and because it would encourage DNSSEC adopters to run the most recent name server software versions.
One of the big unknowns at the time revolved around the size of Domain Name System (DNS) responses. Until DNSSEC came along, the majority of DNS responses were relatively small in size and could easily fit in the 512-byte size limit imposed by the early standards documents (in order to accommodate some legacy internet infrastructure packet size constraints). With DNSSEC, however, some responses would exceed this limit. DNS operators at the time were certainly aware that some recursive name servers had difficulty receiving large responses because of middleboxes (e.g., firewalls) and gateways that (incorrectly) enforced the 512-byte limit, blocked IP fragments, or blocked DNS over Transmission Control Protocol (TCP). This uncertainty around legacy system support for large packets is one of the reasons that the design team chose to use a 1024-bit ZSK for the root zone, and also why NIST's Special Publication 800-57 Part 3 recommended using 1024-bit ZSKs through October 2015.
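The mechanism that lets modern resolvers accept responses beyond the legacy 512 bytes is EDNS0 (RFC 6891), in which the client advertises its receive buffer size inside an OPT pseudo-record appended to the query. As a rough illustration of the wire format (a sketch, not production code), the following builds a raw DNSKEY query for the root zone advertising a 4096-byte UDP payload:

```python
import struct

def build_dns_query(name, qtype=48, edns_size=4096):
    """Build a raw DNS query for `name` with an EDNS0 OPT record
    advertising `edns_size` bytes of UDP payload (qtype 48 = DNSKEY)."""
    header = struct.pack(">HHHHHH",
                         0x1234,  # transaction ID (arbitrary)
                         0x0100,  # flags: standard query, recursion desired
                         1,       # QDCOUNT: one question
                         0, 0,    # ANCOUNT, NSCOUNT
                         1)       # ARCOUNT: the OPT pseudo-record
    # Encode the query name as length-prefixed labels, ending at the root
    qname = b"".join(
        bytes([len(label)]) + label.encode()
        for label in name.split(".") if label
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    # OPT pseudo-record (RFC 6891): root name, TYPE=41, and the CLASS
    # field repurposed to carry the advertised UDP payload size
    opt = b"\x00" + struct.pack(">HHIH", 41, edns_size, 0, 0)
    return header + question + opt

query = build_dns_query(".")  # DNSKEY query for the root zone
print(len(query), "bytes on the wire")
```

A resolver that omits this OPT record is implicitly limited to 512-byte responses, which is exactly the legacy behavior the design team had to plan around.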
A number of things have changed since that initial design. 1024-bit RSA keys have fallen out of favor: the CA/Browser Forum, for example, deprecated the use of 1024-bit keys for SSL certificates as of 2013. That deprecation prompted many in the community to begin the transition away from 1024-bit keys.
Additionally, operational experience over the years has shown that the DNS ecosystem, and perhaps more importantly, the underlying IP network infrastructure, can handle larger responses due to longer key sizes. Furthermore, there is increased awareness that when DNSSEC signature validation is enabled, a recursive name server might need to rely on either fragmentation of large packets, or the transport of DNS messages over TCP.
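A back-of-the-envelope calculation shows why key length drives response size: an RSA signature is as long as the modulus, so each RRSIG grows by 128 bytes when moving from a 1024-bit to a 2048-bit key. The figures below ignore name compression and per-record header overhead, so treat them as rough estimates rather than exact wire sizes:

```python
def rsa_signature_bytes(key_bits):
    # An RSA signature is the same length as the modulus
    return key_bits // 8

def rrsig_rdata_bytes(key_bits, signer_name_len=1):
    # RRSIG RDATA (RFC 4034): 18 bytes of fixed fields
    # + signer name (1 byte for the root) + the signature itself
    return 18 + signer_name_len + rsa_signature_bytes(key_bits)

for bits in (1024, 2048):
    print(bits, "bit ZSK ->", rrsig_rdata_bytes(bits), "byte RRSIG RDATA")

growth = rrsig_rdata_bytes(2048) - rrsig_rdata_bytes(1024)
print("each signature grows by", growth, "bytes")
```

Multiply that 128-byte growth across every signature in a response and it is easy to see how a DNSKEY response can cross the 1500-byte Ethernet MTU, triggering fragmentation or TCP fallback.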
Today, more than 1,300 top-level domains (TLDs) are signed with DNSSEC. Of these, 97 are already using 2048-bit RSA keys for zone signing. Furthermore, more than 200 TLDs have recently published zones whose DNSKEY response size exceeds 1500 bytes.
For these reasons, now is an appropriate time to strengthen the DNS by increasing the root zone's ZSK to 2048-bits. Our colleagues at ICANN agree. According to David Conrad, ICANN's CTO, "ICANN applauds Verisign's proactive steps in increasing the length of the ZSK, thereby removing any realistic concern of root zone data vulnerability. We see this, along with ICANN's updating of the Key Signing Key scheduled next year, as critical steps in ensuring the continued trust by the internet community in the root of the DNS."
To raise awareness among the network and DNS operations communities of this improvement to the security of the internet's DNS, we presented our plans at the DNS-OARC, NANOG, IETF, RIPE and ICANN meetings; and will continue to post updates on the NANOG, dns-operations, and dnssec-deployment mailing lists, and share updates through the Verisign blog.
Verify Your Network's Capabilities
It is important to ensure that internet users are able to receive larger responses when they appear, including those carrying signatures from 2048-bit ZSKs. To that end, Verisign has developed a web-based utility that you can use to verify your network's and name server's ability to receive larger, signed responses.
If you'd like to ensure your systems are ready for this security upgrade, visit keysizetest.verisignlabs.com to perform the verification. The page loads a small image file in the background from a number of subdomains, each signed with different ZSK and KSK parameters. The results are displayed in a table, and a successful test shows every combination resolving correctly.
If you see different results, you should investigate as described on the web page. If you are unable to solve problems related to resolution of domains signed with large DNSSEC keys, send an email to Verisign at firstname.lastname@example.org.
Both Verisign and ICANN have already spent a significant amount of time on development and testing of their systems to support 2048-bit ZSKs. Signatures over the sets of keys have already been generated at two signing ceremonies this year. The next steps are:
- Sept. 20: The first 2048-bit ZSK will be pre-published in the root zone. This follows the normal process for quarterly ZSK rollovers whereby incoming ZSKs are pre-published for a period of approximately 10 days. Should any unforeseen problems arise during this time, Verisign has the ability to "unpublish" the new ZSK and continue using the old (smaller) one.
- Oct. 1: Verisign will publish the first root zone signed with a 2048-bit ZSK. The outgoing 1024-bit ZSK will remain in a post-publish state for approximately 30 days. Similarly, should any unforeseen problems arise during this time, Verisign has the ability to revert to signing with the previous 1024-bit ZSK.
Please take a few moments and verify that your systems are properly provisioned by visiting keysizetest.verisignlabs.com. If you have any concerns that you'd like to make Verisign aware of, please contact us at email@example.com.
Written by Duane Wessels, Principal Research Scientist at Verisign
Cloud computing is on the rise. International Data Corp. predicts a $195 billion future for public cloud services in just four years. That total is for worldwide spending in 2020 — more than twice the projection for 2016 spending ($96.5 billion).
As a result, companies are flocking to both large-scale and niche providers to empower cloud adoption and increase IT efficacy. The problem? Without proper management and oversight, cloud solutions can end up underperforming, hampering IT growth or limiting ROI. Here are four top tips to help your company make the most of the cloud.
The disconnect between corporate-approved and "shadow" IT services puts critical company data in peril, according to a recent report on cloud data security commissioned by Gemalto, a digital security firm. This gap exists thanks largely to the cloud: Tech-savvy employees used to the kind of freedom and customization offered by their mobile devices often circumvent IT policies to leverage the tools they believe are "best" for the job.
Solving this problem requires a new tech conversation: IT professionals must be willing to engage with employees to ensure all applications running on corporate networks are both communicating freely and actively securing infrastructure from outside threats. By crafting an app ecosystem that focuses on interoperability and input from end users, it's possible to maximize cloud benefits.
How often does your company update essential cloud services and applications? If the answer is "occasionally," you may be putting your cloud ROI at risk. Here's why: As cloud solutions become more sophisticated, so too are malicious actors as they leverage new techniques to compromise existing vulnerabilities or circumvent network defenses. By avoiding updates on the off chance that they may interfere with your existing network setup, you substantially increase your risk of cloud compromise. Best bet? Make sure all cloud-based applications, platforms and infrastructure are regularly updated, and keep your ear to the ground for any word of emergent threats.
For many companies, a reluctance to move to the cloud because of security concerns manifests itself as overuse of manual processes. For example, if you're leveraging a cloud-based analytics solution but still relying on human data entry and verification, you're missing out on significant cloud benefits. This is a widespread issue: just 16 percent of companies surveyed said they've automated the majority of their total cloud setup, citing security, cost and lack of expertise as the top holdbacks, according to a recent report by Logicworks and Wakefield Research. Bottom line? One key feature of the cloud is the ability to handle large-scale, complex workloads through automation. Avoiding automation in favor of manual "checking" leaves significant cloud returns on the table.
Solve SLA Issues
Last but not least: Make the most of your cloud deployment by hammering out the ideal service-level agreement (SLA). Right now, there are no hard and fast "standards" when it comes to the language used in SLAs, or the responsibilities of cloud providers. As a result, many SLAs are poorly worded and put vendors in a position to avoid much of the blame if services don't live up to expectations. Avoid this problem by examining any SLA with a critical eye — ask for clarification where necessary and specifics wherever possible, and make sure your provider's responsibility for uptime, data portability and security are clearly spelled out.
Want to make the most of your cloud services? Open the lines of communication, always opt for updates, embrace automation, and don't sign subpar SLAs.
Written by Jeff Becker, Director of Marketing at ATI
The telecoms industry has two fundamental issues whose resolution is a multi-decade business and technology transformation effort. This re-engineering programme turns the current "quantities with quality" model into a "quantities of quality" one. Those who prosper will have to overcome a powerfully entrenched incumbent "bandwidth" paradigm, whereby incentives are initially strongly against investing in the inevitable and irresistible future.
Recently I had the pleasure of meeting the CEO of a fast-growing vendor of software-defined networking (SDN) technology. The usual ambition for SDN is merely internal automation and cost optimisation of network operation. In contrast, their offering enables telcos to develop new "bandwidth on demand" services. The potential for differentiated products that are more responsive to demand makes the investment case for SDN considerably more compelling.
We were discussing the "on-demand" nature of the technology. By definition this is a more customer-centric outlook than a supply-centric "pipe" mentality, which comes in a few fixed and inflexible capacities. What really struck me was how the CEO found it hard to engage with a difficult-to-hear message: "bandwidth" falls short as a way of describing the service being offered, both from a supply and demand point of view.
At present, telecoms services are typically characterised as a bearer type (e.g. Ethernet, IP, MPLS, LTE) and a capacity (expressed as a typical or peak throughput). Whatever capacity you buy can be delivered over many possible routes, with the scheduling of the resources in the network being opaque to end users. All kinds of boxes in the network can hold up the traffic for inspection or processing. Whatever data turns up will arrive with a certain level of "impairment" in the form of delay and (depending on the technology) loss.
This means you have variable levels of quality on offer: a "quantity with quality" model. You are contracted to a given quantity, and it turns up with some kind of quality, which may be good or poor. Generally only the larger enterprise or telco-to-telco customers are measuring and managing quality to any level of sophistication. Where there is poor quality, there may be an SLA breach, but the product itself is not defined in terms of the quality on offer.
This "quantity with quality" model has two fundamental issues.
The first is that "bandwidth" does not reflect the true nature of user demand. An application will perform adequately if it receives enough timely information from the other end. This is an issue of quality first: you merely have to deliver enough volume at the required timeliness. As a result, a product characterised in terms of quantity does not sufficiently define whether it is fit-for-purpose.
In the "quantity with quality" model the application performance risk is left with the customer. The customer has little recourse if the quality varies and is no longer adequate for their needs. Since SLAs are often very weak in terms of ensuring the performance of any specific application, you can't complain if you don't get the quality over-delivery that you (as a matter of custom) feel you are entitled to.
The second issue is that "bandwidth" is also a weak characterisation of the supply. We are moving to a world with ever-increasing levels of statistical sharing (packet data and cloud computing) and dynamic resource control (e.g. NFV, SD-WAN). This introduces more variability into the supply, and an average like "bandwidth" misses the service quality and user experience effects of these high-speed changes.
The impact on the network provider is that they often over-deliver in terms of network quality (and hence carry excessive cost) in order to achieve adequate application performance. Conversely, they also sometimes under-deliver quality, creating customer dissatisfaction and churn, and may not know it. Optimising the system for cost or revenue is hard when you don't fully understand how the network control knobs relate to user experience, or what 'success' looks like to the customer.
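The gap between an average "bandwidth" figure and experienced quality can be made concrete with a toy simulation (the delay figures are purely illustrative): two links with near-identical mean per-packet delay can offer wildly different experiences once you look at the tail, which is what an interactive application actually feels.

```python
import random
import statistics

random.seed(1)

# Link A: steady delay around 10 ms per packet
steady = [10 + random.uniform(-1, 1) for _ in range(10_000)]
# Link B: usually very fast, but 5% of packets stall for 170 ms
bursty = [2 if random.random() < 0.95 else 170 for _ in range(10_000)]

for name, delays in (("steady", steady), ("bursty", bursty)):
    mean = statistics.mean(delays)
    p99 = statistics.quantiles(delays, n=100)[98]  # 99th percentile
    print(f"{name}: mean = {mean:.1f} ms, p99 = {p99:.1f} ms")
```

Both links report a mean near 10 ms, yet the bursty link's 99th-percentile delay is an order of magnitude worse; a product defined only by average throughput cannot distinguish the two, which is precisely the provider's blind spot described above.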
What the CEO of the SDN vendor found especially challenging was dealing with a factual statement about networks: there is an external reality to both the customer experience and network performance, and aligning to that reality is not merely a good idea, it is (in the long run) mandatory! This felt like a confrontational attack on their "bandwidth on demand" technology and business model.
Confronting this "reality gap" is an understandable source of anxiety. The customer experience is formed from the continual passing of instantaneous moments of application operation. The network performance is formed by the delivery of billions and trillions of packets passing through stochastic systems. Yet the metrics we use to characterise the service and manage it reflect neither the instantaneous nature of demand, nor the stochastic properties of supply. The news that you also needed to upgrade your mathematics to deal with a hyper-dynamic reality only adds to the resistance.
An industry whose core practices are disconnected from both demand and supply inevitably faces trouble. In terms of demand, users find it hard to express their needs and buy a fit-for-purpose supply. If you are moving to an "on-demand" model, it helps if customers have a way of expressing demand in terms of the value they seek. For managing supply, you need to be able to understand the impact of your "software-defined" choices on the customer, so as to be able to make good ones and optimise cost and QoE.
The only possible resolution is to align with an unchanging external reality, and move to a new paradigm. We need to upgrade our products and supporting infrastructure to a "quantities of quality" model. By making the minimum service quality explicitly defined, we can both reflect the instantaneous nature of user experience demand, and also the stochastic and variable-quality nature of supply.
This is not a trivial matter to execute, given how every operational system and commercial incentive is presently designed to sell ever more quantity, not to align supply quality with the demand for application performance.
In the short run, the answer is to shine a brighter light on what quality is being delivered in the existing "bandwidth" paradigm. If you are engineering an SD-WAN, for example, and you have lots of sharp "transitions" as you switch resources around, what is the impact of shifting those loads on the end user? Do you have sufficient visibility of the supply chain to understand your contribution to success or failure in the users' eyes?
In the medium term, the engineering models used by these systems need to make quality a first-class part of the design process. The intended uses need to be understood, the quality required to meet them properly defined, and the operational mechanisms configured to ensure that this is delivered. The science and engineering of performance needs to improve to make this happen, and a lot of operational and business management systems upgraded.
In the long run, the fundamental products and processes need to be changed to a more user-centric model for an on-demand world. Rather than only buying a single broadband service to deliver any and all applications, networks will interface with many cloud platforms that direct application performance through APIs. Those APIs will define the quality being demanded, which the network must then supply.
Success in a software-defined world will not come from repackaging circuit-era products, but from engineering known outcomes for end users with tightly managed cost and risk. New custom will come from being an attractive service delivery partner to global communications and commerce platforms.
Only an improved "quantities of quality" approach can deliver this desirable industry future.
Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd
Search engines drive traffic to sites that are well ranked for the right keywords.
According to a recent study carried out by Custora in the USA, search engines — paid and organic — represent close to 50% of e-commerce orders, compared to 20% for direct entry. A dot brand domain has the potential to boost direct entry, as it can be more memorable than traditional domains. Can dot brand domains also be part of a consistent search engine strategy?
In order to have traffic coming in from search engines, it is necessary to achieve a good ranking: the link in the first position of the search engine results page gets six times more clicks than the link in the fifth position.
It is also important that the site is optimised for the right keyword, with enough search volume. For instance, "domain name" is searched 33,100 times per month on google.com, while "gTLD" is searched 1,600 times. Being first on a search for "domain name" would therefore generate approximately 20 times more traffic than being first on "gTLD".
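The arithmetic behind that claim is simple: estimated traffic ≈ monthly search volume × the click-through rate for your position. The click-through rates below are illustrative assumptions (chosen so that position one gets roughly six times the clicks of position five), not measured figures:

```python
# Illustrative click-through rates by results-page position
ctr = {1: 0.30, 5: 0.05}
monthly_searches = {"domain name": 33_100, "gTLD": 1_600}

def estimated_clicks(keyword, position):
    """Rough monthly traffic estimate: volume x position CTR."""
    return monthly_searches[keyword] * ctr[position]

ratio = estimated_clicks("domain name", 1) / estimated_clicks("gTLD", 1)
print(f"first on 'domain name' vs first on 'gTLD': {ratio:.1f}x the traffic")
```

Since the CTR cancels out when comparing the same position, the roughly 20x figure is just the ratio of search volumes (33,100 / 1,600), which is why keyword choice matters at least as much as ranking position.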
How important are domain names in search?
The Google algorithm is kept secret, and its artificial intelligence enables it to learn from past behaviours and trends. This artificial intelligence also enables Google to show a different results page to every user, based on their profile and search history. It is therefore very difficult to establish an exact list of the criteria that play a role in search rankings.
Many specialists try to decrypt and anticipate the algorithm. Moz.com runs a community of more than 2 million specialists and every year publishes a list of 90 factors influencing search engine rankings. Every factor is weighted from 1 (meaning the factor has no direct influence) to 10 (meaning the factor has a strong influence on ranking).
Some of these factors do not depend on the keyword. For instance, the number and quality of inbound links show that a domain is more authoritative and should therefore be ranked better. Other factors depend on the keyword — such as the number of keyword matches in the body text. The full study is available on Moz.
The domain name related factors fall into three main categories:
- Factors related to the domain characteristics: These factors include the age of the domain, the duration until expiration, the length of the domain, etc. The corresponding influence scores vary from 2.45 to 5.37, which means they have a relatively low influence.
- Factors related to the execution of the strategy: These factors are much more important, and they depend on how well the domain name is marketed and operated by the brand. The raw popularity of the domain, i.e. the number of links pointing to it, has a weight of 7.15, while the quantity of citations for the domain name across the web is weighted 6.26.
- Factors related to the presence of keywords: There are five factors directly linked to the presence of the keyword in the domain name, depending on whether it appears in the root domain, in the extension, or as an exact match. These factors have a relatively low impact, with scores varying between 2.55 and 5.83, the highest being when the search corresponds to the exact-match root domain name.
Dot brand performance review
In order to check the actual performance of dot brand domains, we performed searches on the brand name and on the second level domain used by the brand. The analysis was performed on the 60 brands that rank significantly in the search engine, corresponding to around 250 websites. The full study is available to our members, but here are some of our findings:
- Presence of dot brand domains when searching for the brand name:
- 13% of the searches returned a dot brand domain in the first position. Home.cern, group.pictet and engineeringandconstruction.sener are among these domains.
- 65% of the first result pages included at least one dot brand domain.
- Local and global optimization:
- The Barclays TLD targets international customers more than local UK ones: dot barclays is better ranked on google.com than on google.co.uk.
- Conversely, Praxi targets Italians more than international customers: dot praxi is better ranked on google.it than on google.com.
- Brand and other keywords:
- BNPParibas and Sener have optimised search both for their brand name and for the keywords in their second level domain name.
- Some brands, such as Weir, Fage or CERN, have optimised search for their brand name, but the keywords chosen for their second level domain are generic and do not yield a very good ranking (e.g. usa.fage).
- Brands such as BMW have focused on searches for the second level domain name more than for their brand name: next100.bmw is better ranked when searching for "next 100" than for "bmw".
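The tallies behind a review like the one above can be sketched simply: given the best rank of any dot brand domain observed for each search, compute the share of searches with a dot brand in first position and the share of first pages (top 10 results) containing at least one. The sample observations below are invented for illustration; the real study covered 60 brands.

```python
# search term -> best rank of any dot brand domain (None = not on page 1)
observations = {
    "cern":     1,     # e.g. home.cern
    "pictet":   1,     # e.g. group.pictet
    "barclays": 4,
    "bmw":      9,
    "weir":     None,  # generic second level keyword, no page-1 ranking
}

# ranks that landed on the first result page (top 10)
on_first_page = [r for r in observations.values() if r is not None and r <= 10]

first_position_share = sum(1 for r in on_first_page if r == 1) / len(observations)
first_page_share = len(on_first_page) / len(observations)

print(f"{first_position_share:.0%} of searches with a dot brand in first position")
print(f"{first_page_share:.0%} of first pages with at least one dot brand")
```

On this invented sample the shares come out at 40% and 80%; the study's actual figures for its 60 brands were 13% and 65%.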
A number of brand TLD owners have launched their dot brand domain names with a clear SEO strategy. Their dot brand websites are very well ranked when searching for the brand name or for the keyword composing the second level domain name. Other brands have not optimised their sites and therefore rank poorly.
The SEO component of a dot brand launch must be carefully planned and operated. Dot brand sites benefit from the presence of both a keyword and a brand name in the domain name. This can help the brand go the extra mile, and it will pay off once the site, its content and the other domain factors have been optimised.
Written by Guillaume Pahud, CEO
Follow CircleID on Twitter
A group of Democratic U.S. senators on Tuesday demanded that Yahoo Inc (YHOO.O) explain why hackers' theft of user information for half a billion accounts two years ago only came to light last week, and lambasted its handling of the breach as "unacceptable," reports Dustin Volz from Washington for Reuters. The lawmakers said they were "disturbed" that the 2014 intrusion, disclosed by the company on Thursday, was detected so long after the hack occurred: "This is unacceptable." The senators have asked Yahoo Chief Executive Officer Marissa Mayer for a timeline of the hack, its discovery and how such a large breach went undetected for so long.
Donald Trump vs Hillary Clinton – First Presidential Debate 2016 / Hofstra University, NY
The Internet and tech got very little mention last night during the first of three presidential debates. The only notable exception was cybersecurity, where moderator Lester Holt asked: "Our institutions are under cyber attack, and our secrets are being stolen. So my question is, who's behind it? And how do we fight it?" Following are the responses provided to the question by the two candidates:
* * *
Hillary Clinton – Well, I think cyber security, cyber warfare will be one of the biggest challenges facing the next president, because clearly we're facing at this point two different kinds of adversaries. There are the independent hacking groups that do it mostly for commercial reasons to try to steal information that they can use to make money.
But increasingly, we are seeing cyber attacks coming from states, organs of states. The most recent and troubling of these has been Russia. There's no doubt now that Russia has used cyber attacks against all kinds of organizations in our country, and I am deeply concerned about this. I know Donald's very praiseworthy of Vladimir Putin, but Putin is playing a really tough, long game here. And one of the things he's done is to let loose cyber attackers to hack into government files, to hack into personal files, hack into the Democratic National Committee. And we recently have learned that, you know, that this is one of their preferred methods of trying to wreak havoc and collect information. We need to make it very clear — whether it's Russia, China, Iran or anybody else — the United States has much greater capacity. And we are not going to sit idly by and permit state actors to go after our information, our private-sector information or our public-sector information.
And we're going to have to make it clear that we don't want to use the kinds of tools that we have. We don't want to engage in a different kind of warfare. But we will defend the citizens of this country.
And the Russians need to understand that. I think they've been treating it as almost a probing, how far would we go, how much would we do. And that's why I was so — I was so shocked when Donald publicly invited Putin to hack into Americans. That is just unacceptable. It's one of the reasons why 50 national security officials who served in Republican information — in administrations — have said that Donald is unfit to be the commander- in-chief. It's comments like that really worry people who understand the threats that we face.
* * *
Donald Trump – As far as the cyber, I agree to parts of what Secretary Clinton said. We should be better than anybody else, and perhaps we're not. I don't think anybody knows it was Russia that broke into the DNC. She's saying Russia, Russia, Russia, but I don't — maybe it was. I mean, it could be Russia, but it could also be China. It could also be lots of other people. It also could be somebody sitting on their bed that weighs 400 pounds, OK? You don't know who broke in to DNC.
But what did we learn with DNC? We learned that Bernie Sanders was taken advantage of by your people, by Debbie Wasserman Schultz. Look what happened to her. But Bernie Sanders was taken advantage of. That's what we learned.
Now, whether that was Russia, whether that was China, whether it was another country, we don't know, because the truth is, under President Obama we've lost control of things that we used to have control over.
We came in with the Internet, we came up with the Internet, and I think Secretary Clinton and myself would agree very much, when you look at what ISIS is doing with the Internet, they're beating us at our own game. ISIS.
So we have to get very, very tough on cyber and cyber warfare. It is — it is a huge problem. I have a son. He's 10 years old. He has computers. He is so good with these computers, it's unbelievable. The security aspect of cyber is very, very tough. And maybe it's hardly doable.
But I will say, we are not doing the job we should be doing. But that's true throughout our whole governmental society. We have so many things that we have to do better, Lester, and certainly cyber is one of them.
* * *
"Preserving a Free and Open Internet," is the title of a post published today by Kent Walker, Google's SVP and General Counsel. He writes in part: "Why the IANA Transition Must Move Forward ... Although this is a change in how one technical function of the Internet is governed, it will give innovators and users a greater role in managing the global Internet. And that's a very good thing. The Internet has been built by — and has thrived because of — the companies, civil society activists, technologists, and selfless users around the world who recognized the Internet's power to transform communities and economies. If we want the Internet to have this life-changing impact on everyone in the world, then we need to make sure that the right people are in a position to drive its future growth. This proposal does just that."