I have a new book out, Thinking Security: Stopping Next Year's Hackers. There are lots of security books out there today; why did I think another was needed?
Two wellsprings nourished my muse. (The desire for that sort of poetic imagery was not among them.) The first was a deep-rooted dissatisfaction with common security advice. This common "wisdom" — I use the word advisedly — often seemed to be outdated. Yes, it was the distillation of years of conventional wisdom, but that was precisely the problem: the world has changed; the advice hasn't.
Consider, for example, passwords (and that specifically was the other source of my discomfort). We all know what to do: pick strong passwords, don't reuse them, don't write them down, etc. That all seems like very sound advice — but it comes from a 1979 paper by Morris and Thompson. The world was very different then. Many people were still using hard-copy, electromechanical terminals, people had very few logins, and neither defenders nor attackers had much in the way of computational power. None of that is true today. Maybe the advice was still sound, or maybe it wasn't, but very few people seemed to be questioning it. In fact, the requirement was embedded in very static checklists that sites were expected to follow.
Suppose that passwords are in fact terminally insecure. What's the alternative? The usual answer is some form of two-factor authentication. Is that secure? Or is two-factor authentication subject to its own problems? If it's secure today, will it remain secure tomorrow? Computer technology is an extremely dynamic field; not only does the technology change, the applications and the threats change as well. Let's put it like this — why should you expect the answers to any of these questions to remain the same?
The only solution, I concluded, was to go back to first principles. What were the fundamental assumptions behind security? It turns out that for passwords, the main reason you need strong ones is to resist offline guessing after a site's password database has been compromised. In other words, a guessed password is the second failure; if the first could be avoided, the second isn't an issue. But if a site can't protect a password file, can it protect some other sort of authentication database? That doesn't seem likely. What does that mean for the security of other forms of authentication?
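The point about the password database being the first failure can be made concrete with a small sketch (my own illustration, not an example from the book; the function names are hypothetical). A site stores only salted, slow hashes; an attacker who steals a record can nevertheless test guesses offline, which is exactly when password strength starts to matter:

```python
# Sketch: why password strength matters mainly *after* a database leak.
# Servers store (salt, slow hash) pairs; an attacker who steals them can
# test candidate passwords offline at machine speed.
import hashlib
import os

def make_record(password):
    """Store only (salt, hash), never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def offline_guess(salt, digest, wordlist):
    """What an attacker does with a stolen record: try candidates offline."""
    for guess in wordlist:
        if hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, 100_000) == digest:
            return guess
    return None

salt, digest = make_record("letmein")
# A weak password falls to a tiny wordlist; no online login attempts needed.
print(offline_guess(salt, digest, ["password", "123456", "letmein"]))
```

If the database is never stolen, an online guesser is rate-limited by the server and even a modest password survives; the strength requirement is really a hedge against the first failure.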
Threats also change. 21 years ago, when Bill Cheswick and I wrote Firewalls and Internet Security, no one was sending phishing emails to collect bank account passwords. Of course, there were no online banks then (there was barely a Web), but that's precisely the point. I eventually concluded that threats could be mapped along two axes: how skilled the attacker was, and how much your site was being targeted:
Your defenses have to vary. Enterprise-scale firewalls are useful against unskilled joy hackers, they're only a speed bump to intelligence agencies, and targeted attacks are often launched by insiders who are, by definition, on the inside. Special-purpose internal firewalls, though, can be very useful.
All of this and more went into Thinking Security. It's an advanced book, not a collection of checklists. I do give some advice based on today's technologies and threats, but I show what assumptions that advice is based on, and what sorts of changes would lead it to change. I assume you already know what an encryption algorithm is, so I concentrate on what encryption is and isn't good for. The main focus is how to think about the problem. I'm morally certain that right now, someone in Silicon Valley or Tel Aviv or Hyderabad or Beijing or Accra or somewhere is devising something that, 10 years from now, we'll find indispensable and that will have as profound an effect on security as today's smartphones have had. (By the way — the iPhone is only about 8 years old, but few people in high-tech can imagine life without it or an Android phone. What's next?) How will we cope?
That's why I wrote this new book. Threats aren't static, so our defenses and our thought processes can't be, either.
Written by Steven Bellovin, Professor of Computer Science at Columbia University
Follow CircleID on Twitter
Imagine living in a country where it was necessary to register with your community government by providing a copy of one of the following:
1. Driver's License
2. Birth Certificate
4. Immigration Card
5. Military identification
6. Any other state, local, national, or international official documents containing a birth date of comparable reliability
This may well be necessary in a large number of nations. However, as a United States citizen and resident, I was quite surprised when my local community issued the request. I investigated and found, much to my dismay, that my community was in fact required by regulation to survey its residents on a biennial basis.
HUD 24 CFR Part 100, §100.307 Verification of occupancy, contains the above list of "reliable documents" but lists as well:
(7) A certification in a lease, application, affidavit, or other document signed by any member of the household age 18 or older asserting that at least one person in the unit is 55 years of age or older.
Being of advanced age, I was puzzled. Why would my community request one of a number of breeder documents when a "note from my mother" was sufficient? I went back to the form and realized that item six in the community-supplied list did contain language that permitted me to supply a signed certification. I hadn't read beyond "any other state, local, national, or official document containing a birth date".
I admittedly took poetic license by omitting the full text of item six to reflect my understanding when first attempting to comply with the request. Item six actually reads as follows:
(6) Any other state, local, national, or international official documents containing a birth date of comparable reliability or a Certification signed by any member of the household age 18 or older, asserting that at least one person in the unit is 55 years of age or older.
Having determined that I need not supply a breeder document to my Home Owners Association, I filled out the form, providing the bare minimum of information, and wrote a certification on the reverse complying with the HUD rule. Dropping off my form, I was pleasantly informed that I hadn't supplied a government document. I replied that the certification I had written was sufficient.
Reflecting back on the experience, even in my addled state, I realized that few others were likely to notice that a simple statement would suffice to meet the HUD requirement. Others in my community, and throughout the US, might blindly comply with the request and provide any one of a number of breeder documents to their Association.
So I wrote to my Board suggesting that we might want to modify our form to plainly state that a signed certification of age was sufficient. Better yet, write the statement on the form itself with a signature line. I also suggested we might want to establish a policy and culture of minimal collection of Personally Identifiable Information (PII).
My Board Secretary and President helpfully informed me that this was an Association Management issue and that our General Manager would address it. The General Manager dutifully responded assuring me that "all documents collected are ... secured on servers" and that the "risk of not doing so may jeopardize the Association in other ways that may have far greater risk and consequences".
That made me feel better.
Of course I did pause to consider how the risk assessment was done, balancing the need to comply with a regulation against the potential for data breach, identity theft, fraud, fraudulent account creation, account takeover, creation of false passports, and the like. My calculus must be rather different from that used by First Services Residential, the management firm that services my Association.
The risk to individuals presented by storing copies of breeder documents "on servers" is well-known and substantial, as is the risk to the entity maintaining the data. Data breaches, even among "the best," are all too frequent. In states like California, breaches require notification and payment for identity protection services, and may require forensic evidence to limit notification requirements and payments to only those whose data was exfiltrated. The real costs can be significant and the perception costs even higher.
... and the elderly are frequent targets of, and highly susceptible to, fraud.
Institutional inertia is a powerful force, and overcoming it can require Sisyphean effort. In this case, the institutions are the US Government and those that attempt to comply with its myriad rules and regulations. The HUD rule with the "list of seven" became final in 1999, well before data breaches were a serious concern. Hopefully, if the regulation were written today, its language would be quite different and might even include an admonition against storing copies of breeder documents (if still listed).
Looking at the regulation, one wonders why it is necessary to repeatedly collect age information. Is HUD concerned that some of us might be getting younger and consequently no longer qualify as "over 55"? Could the goals of the survey and data collection be achieved through some other mechanism? Perhaps simple affidavits with a statistical sampling, either periodically or in case of question would suffice.
No doubt other mechanisms exist to achieve the goal, whatever it might be. Equally certain should be a recognition that minimal data collection by (quasi-)governmental entities must be the norm. Requesting and obtaining copies of breeder documents is, to my mind, a questionable practice. Storing them "on servers", if not air-gapped, makes them accessible to malevolent actors: criminal, terrorist, or governmental.
We can, and must, do better. Governments need to review regulations and strike rules requiring excessive information. Businesses must be encouraged to adopt policies and cultures that reduce collection of PII.
Security can enhance privacy, but only so much. Breaches are inevitable. Data will be exfiltrated. But if the data has little value, it becomes of little interest. Minimizing data collection will require institutional change. Effecting that change will require substantial effort. As Security and Privacy experts, we should encourage this change and enlist others in our efforts.
Written by Bill Smith, Sr. Policy Advisor, Technology Evangelist at PayPal
One of the longstanding goals of network security design is to be able to prove that a system — any system — is secure.
Designers would like to be able to show that a system, properly implemented and operated, meets its objectives for confidentiality, integrity, availability and other attributes against the variety of threats the system may encounter.
A half century into the computing revolution, this goal remains elusive.
One reason for the shortcoming is theoretical: Computer scientists have made limited progress in proving lower bounds for the difficulty of solving the specific mathematical problems underlying most of today's cryptography. Although those problems are widely believed to be hard, there's no assurance that they must be so — and indeed it turns out that some of them may be quite easy to solve given the availability of a full-scale quantum computer.
Another reason is a quite practical one: Even given building blocks that offer a high level of security, designers, as well as implementers, may well put them together in unexpected ways that ultimately undermine the very goals they were supposed to achieve.
Building an Insecure System Out of Perfectly Good Cryptography
Dr. Radia Perlman, a networking and security pioneer, Internet Hall of Fame inductee and EMC Fellow, recently shared her perspectives on the challenges of practical security in a lecture for Verisign Labs' Distinguished Speaker Series. Speaking on the topic, "How to Build an Insecure System out of Perfectly Good Cryptography," Radia began with a simple example based on the famous one-time pad, one of the few known unconditionally secure cryptosystems. In her talk, she showed how two users could each individually encrypt a message securely with a one-time pad — and yet still reveal enough information through the ciphertexts they exchange for an adversary to uncover the message. An insecure system has thus been built out of perfectly good cryptography.
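A classic illustration of this (not necessarily the exact example Radia used) is pad reuse: each ciphertext on its own is unconditionally secure, yet XORing two ciphertexts produced under the same pad cancels the pad out entirely and exposes the XOR of the plaintexts:

```python
# Two-time pad: perfectly good cryptography, insecurely composed.
# Each message is encrypted with a one-time pad -- but with the *same* pad.
import os

def otp(msg, pad):
    """One-time-pad encryption/decryption: bytewise XOR with the pad."""
    return bytes(m ^ p for m, p in zip(msg, pad))

pad = os.urandom(16)                  # perfectly random... but used twice
c1 = otp(b"attack at dawn!!", pad)
c2 = otp(b"retreat at once!", pad)

# The adversary never sees the pad, yet c1 XOR c2 == p1 XOR p2:
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(m ^ n for m, n in zip(b"attack at dawn!!", b"retreat at once!"))
# From p1 XOR p2, standard crib-dragging recovers both plaintexts.
```

The individual building block is information-theoretically secure; the composition is not, which is precisely the lecture's theme.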
Radia's research career began with a similarly healthy dose of skepticism about a supposed proof about the stability of the ARPANET, the predecessor to today's Internet. Another researcher had published a proof that the ARPANET routing protocols were correct and that the system could not become unstable. Radia offered a counterexample showing that if three particular routing messages were sent, the network would become permanently unstable. The researcher's response: "If you put in bad data, what do you expect?"
Security, Radia observed, is not about what happens in ideal situations, but in the reality of errors and threats.
(I recall a software implementation I wrote years ago: in the ideal situation, where all input lengths were within their accepted ranges, I was confident that the encryption algorithm performed correctly. It did not take long for someone to discover a buffer overflow attack. If only I had thought more at the time about the kinds of issues Radia raised!)
Radia's series of vignettes continued with comments on standards development as a series of unacknowledged idea exchanges between competing moving targets, and additional examples of practical security challenges from the history of ITU-T X.509 certificates, Privacy-Enhanced Mail and credential management systems. Her remarks on certificate management echo points Verisign Labs has made about the benefits of publishing certificates as DNS records, and she goes a step further in recommending that trust anchors start at the user's organization, to further reduce the risk of compromise of other points in the system.
Moving to a discussion of user interaction, she pointed out the challenges in "secure" screen savers that sometimes require just a single key to be typed, other times a password, and still other times both a username and a password. Individually, all three are effective — but if you're giving a presentation from your laptop and you've paused long enough for the screen saver to enter the third mode but you think it's in the second, you might find yourself typing your password in the username field for a live demonstration of another case of perfectly good cryptography turned insecure.
The Spectrum of Usability and Security Tradeoffs
With other memorable examples of password rules and security questions, it is easy to understand Radia's conclusion that in the spectrum of security/usability tradeoffs, not only have designers not achieved any point on the optimal balance between the two — the diagonal line in the figure — but hardly any of either dimension.
The classic volume Network Security, which the speaker co-authored, concludes with the observation: "[humans] are sufficiently pervasive that we must design our protocols around their limitations." Networks and applications are built by humans, used by humans, and attacked by humans. If we want a system to be secure, following Radia's wise advice, we need to design it for humans — and protect it against humans as well. That advice will prove to be much more impactful than any mathematical assurance could be.
Written by Burt Kaliski, Chief Technology Officer at Verisign
If you would like to help guide the future of the Public Interest Registry (PIR), the non-profit operator of the .ORG, .NGO and .ONG domains, the deadline for nominations is MONDAY, NOVEMBER 30, 2015!
More information about the positions and the required qualifications can be found at:
As I noted in an earlier post here on CircleID, there are three open positions on the PIR Board whose terms will begin in mid-2016 and run through mid-2019.
After reading the information about the PIR Board requirements, you are welcome to nominate either yourself or anyone else using the PIR Nomination Form. Nominations close at 23:00 UTC on November 30, 2015, so don't delay!
The Internet Society Board of Trustees will then begin a selection process that will go through March 2016.
In full disclosure, the Internet Society is my employer but I have no direct connection to PIR. I just think CircleID readers would be obvious potential nominees for the PIR Board.
Written by Dan York, Author and Speaker on Internet technologies
The Internet Society today announced that it has joined more than 200 organizations and individuals who have signed a statement intended for leaders and governments participating in the United Nations General Assembly's 10 Year Review of the World Summit on the Information Society (WSIS+10 Review).
On 15-16 December, government officials from more than 190 countries will meet in New York City for the WSIS+10 Review to assess progress in achieving a people-centered and development-oriented Information Society where everyone can create, access, use and share information. Leaders will discuss a wide range of issues, including the role of governments in Internet oversight, expansion of Internet access, and the impact of Internet technologies in supporting the UN's Sustainable Development Goals.
The discussions at the WSIS+10 Review can influence how the Internet is governed for the next decade and beyond. The 200+ signatories to this Joint Statement urged the UN to safeguard fundamental Internet principles and involve all stakeholders in the WSIS+10 review process. The list of signatories is available and more signatures are welcomed from both organizations and individuals.
As part of its transparency report, Google says copyright removal requests continue to rise steadily; the company received a record 65,923,523 requests last month alone. This data presents information specified in requests the search giant received from copyright owners, through its web form, to remove search results that link to allegedly infringing content. The following is a graph showing URLs requested to be removed from Google Search per week since July 2011.
A very interesting meeting, the Internet Governance Forum (IGF), with the ambitious theme of connecting the world's next billion people to the Internet, took place in early November 2015 in the beautiful resort city of João Pessoa, Brazil, under the auspices of the United Nations. Few citizens of the world paid attention to it, yet the repercussions of the policy issues discussed affect us all.
Each year, there is one topic that takes the world by storm at the IGF. Two years ago, it was surveillance. This year, it was net neutrality. Net neutrality, in its most basic form, is the principle that Internet service providers should treat all content that passes through their networks equally. For example, Vodafone should not give preference to Wikipedia by offering it for free, and Airtel should not give preference to YouTube by giving it a fast lane at the expense of other websites. There are many ways in which net neutrality is violated, among them traffic shaping and zero rating.
Zero rating means the end user does not pay for accessing a certain service, but the websites the end user can access are limited. For example, the user will only have free access to Facebook, or Wikipedia, and nothing else. The content the user can access is determined by those with financial power. And there lies the problem, limited access for the end user. You see, the Internet is a public good, an engine for economic growth and development. The utilitarian approach is therefore to ensure as many people as possible have access to the Internet for a nation to attain its economic potential.
At the IGF, researchers took sides on zero rating depending on their interests. One study in Asia revealed that zero-rated services were an entry point for people who had no access to the Internet, and that those who used zero-rated services converted to paid users after a while. Another study showed that people don't use the Internet not because of cost or availability, but because they don't need it. A weird conclusion, I must say. An interesting fact: in communities where zero-rated services were the norm, the users did not know the difference between the Internet and Facebook. That is a major problem if you ask me. Another study, by the Mozilla Foundation, dubbed "equal rating," found that when users are given Internet bundles, they access diverse types of websites, not just one single website. But the big question was who funded these types of research. For example, Facebook was accused of flying powerful Cabinet Ministers from developing countries to expensive resorts in California to influence them to allow zero-rated services in their countries. It follows that we need proper research on the long-term economic implications of zero rating.
We should say no to zero rating because it leads to monopolistic behaviour, anti-competitiveness, and customer lock-in. Zero rating gives a false Internet because it removes the incentives for giving underserved regions a proper Internet. Remember, the definition of the Internet is a global system of interconnected computer networks, not just a single website. Companies running zero-rated services are crafty; they just want to add users to their platforms to increase their advertising revenue streams, and thereby increase their companies' valuations and appease their shareholders. Zero rating stifles innovation because innovators are not able to penetrate a market where market leaders with tonnes of money have directed all the users to their own services.
Zero rating is here with us. In Kenya and India, Airtel partnered with Facebook to offer Free Basics, a service that allows users to access only specified websites. There are variations of the same in most parts of the developing world. Some countries, though, have weighed the pros and cons and outlawed zero rating: Chile, Norway, Finland, the Netherlands, Estonia, Iceland, Latvia, Malta, Japan, and Lithuania, among others.
The governments in my part of the world have not taken any steps to protect users, and the innovators among us, from such a demeaning service. What is more annoying is governments' failure to formulate proper ICT policies that move with the rapidly changing times. Some countries in the developing world do not have ICT policies, and those that do have out-dated policies developed more than a decade ago. That is not how to participate in the knowledge economy. It is sad to have governments with pools of policy experts who cannot formulate proper policies for the masses. Isn't it Plato who said, "We can easily forgive a child who is afraid of the dark; the real tragedy of life is when men are afraid of the light!"
All that notwithstanding, the communities that are affected should pay keen interests to the following points:
- Zero rating is not tolerated in progressive countries with strong policies. Ask yourself why.
- Without policy on Net Neutrality, regulators have nothing to enforce thus leaving market players to their own devices, and anti-competitive behaviour.
- The communities, in an all-inclusive manner should develop Net Neutrality policies in their respective regions.
- Most regulators and government agencies are usually given targets to ensure universal coverage of communication services. They are very happy to maintain the status quo, since they can report zero-rated services as a metric of increased Internet access. This would be a big lie, because they will have denied the rural folks access to the Internet. We all know one website is not the Internet. The best practice is for the regulator to pressure telcos to increase rollout in under-served regions as part of their Universal Service obligations.
- Zero rating infringes on fundamental human rights by denying users access to the Internet. It may be a conspiracy to keep developing countries in the darkness of the information age.
The Case for Affordable Internet for Everyone
Advocates for universal broadband access have long urged communities to push for universal coverage, better utilisation of the Universal Service Fund, telecommunication infrastructure sharing, increased road coverage, accessible wayleaves and cable ducts, affordable energy, fair and competitive licensing regimes, and local content and hosting. All of these are policy options that will bring the cost of broadband Internet down to a level every citizen in the developing world can afford. Is it possible? The good news is that the technology to provide universal broadband is here with us. According to an article that appeared in Kenya's Daily Nation, a pilot project in Nairobi's poor neighborhoods showed that it was possible to realise a cost of less than $0.10 for a 3,000MB broadband bundle. As you can see, with simple solutions, affordable broadband will be the new normal. These simple steps are the game changers.
As Vyria Paselk, Director of Internet Leadership at Internet Society put it, "if your country does not have access to the Internet, then you are not participating in the internet economy". And isn't the entire world now an Internet economy?
Written by Mwendwa Kivuva, Networking and Security Expert
In 1905, philosopher George Santayana famously noted, "Those who cannot remember the past are condemned to repeat it." When past attempts to resolve a challenge have failed, it makes sense to consider different approaches even if they seem controversial or otherwise at odds with maintaining the status quo. Such is the case with the opportunity to make real progress in addressing the many functional issues associated with WHOIS. We need to think differently.
Over the last several years a large number of people have worked diligently to explore real alternatives in both technology and policy. On the technology front, the Internet Engineering Task Force (IETF) published a series of RFC documents in March 2015 that specify the Registration Data Access Protocol (RDAP). In 2013, ICANN formed an Expert Working Group (EWG) on generic Top-Level Domain (gTLD) Directory Services. This group produced its final report in June 2014. Both of these efforts were focused on finding new ways to provide registration data directory services by replacing WHOIS.
Fast forward to October 2015 and ICANN-54. On Wednesday, Oct. 21, a session was held to discuss an ICANN proposal for an RDAP implementation profile for use by gTLD registries and registrars. During the session (at approximately 38:16 of the audio transcript) an ICANN staff member described a number of steps that are needed to provide "complete functionality equivalence with WHOIS." What is the benefit of replacing WHOIS with something that is functionally equivalent — and thus functionally deficient? RFC 7482 describes the following WHOIS protocol deficiencies:
- Lack of standardized command structures
- Lack of standardized output and error structures
- Lack of support for internationalization and localization
- Lack of support for user identification, authentication and access control
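The first two deficiencies are what RDAP's standardized structures fix: instead of scraping free-form WHOIS text with per-registry parsers, a client handles well-defined JSON members. A minimal sketch (the member names follow RFC 7483; the response below is a trimmed, hypothetical example, not real registry data):

```python
# RDAP returns structured JSON rather than free-form text. A hypothetical,
# trimmed domain response using the standard RFC 7483 member names:
import json

rdap_response = json.loads("""
{
  "objectClassName": "domain",
  "ldhName": "example.org",
  "status": ["active"],
  "events": [
    {"eventAction": "registration", "eventDate": "1995-08-31T04:00:00Z"},
    {"eventAction": "expiration", "eventDate": "2016-08-30T04:00:00Z"}
  ]
}
""")

# No regexes, no per-server parsers: every conforming server uses the
# same member names, so lookups are simple dictionary access.
registered = next(e["eventDate"] for e in rdap_response["events"]
                  if e["eventAction"] == "registration")
print(rdap_response["ldhName"], rdap_response["status"], registered)
```

That alone is worthwhile, but note it addresses only the first two bullets; internationalization and access control require the features the proposed profile leaves out.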
The EWG on gTLD Directory Services was formed in February 2013 to "define the purpose of collecting and maintaining gTLD registration data, and consider how to safeguard the data" and to "provide a proposed model for managing gTLD directory services that addresses related data accuracy and access issues, while taking into account safeguards for protecting data." The group's final report recommended that "a new approach be taken for registration data access, abandoning entirely anonymous access by everyone to everything in favor of a new paradigm that combines public access to some data with gated access to other data."
RDAP was designed to address the WHOIS deficiencies and the EWG recommendations, but the proposed profile only provides the benefits of standardized command, output and error structures. The profile does not address internationalization and localization of contact information. The profile also does not include support for RDAP's user identification, authentication and access control features. These features are needed to provide data privacy by restricting data access to appropriately authorized users. As currently written, the profile continues the practice of exposing personally identifiable information to anyone who asks. With these much-needed features excluded it would be more reasonable to defer implementation until we have clear consensus on the associated policies. No work will have to be undone in the future if we need to develop additional protocol specifications and add features later.
Our Opportunity to Address the Issues
This issue — the plan to not include support for RDAP's internationalization and data privacy supporting features — is where the profile is setting our industry up for failure. An RDAP implementation that fails to address the most significant issues with WHOIS turns unsolved WHOIS problems into unsolved RDAP problems, and the history of failure to resolve WHOIS deficiencies repeats itself. I've authored an Internet-Draft document that describes one way to address the data privacy problem. There are almost certainly other approaches worth considering. It will take time to consider our options and think through the policy implications associated with data privacy, but that would be time well spent given the evolving nature of data privacy laws and practices in the different legal jurisdictions where gTLD registries and registrars do business. The risk of conflict with these laws and practices needs to be considered to ensure that RDAP implementation, deployment and operation remains a commercially reasonable undertaking.
The profile notes that additional protocol specifications are needed to map Extensible Provisioning Protocol (EPP) domain status codes to RDAP status codes, extend RDAP search capabilities and extend RDAP to include events that describe the registrar expiration date and the date of the most recent database update. The proposed profile implementation schedule includes milestones for the availability of these specifications as RFCs in 2016, but as of today only the domain status mappings are described in an Internet-Draft. If we're going to take the time to develop these features, why should we not take the time to address the internationalization/localization and data privacy features as well? Without these features RDAP produces little more than a JSON-encoding of today's WHOIS data.
I'm also concerned about the approach being taken to develop the profile itself. The IETF has a long tradition of documenting protocol implementation profiles using the Internet-Draft and Informational RFC publication process. Here are a few recent examples:
- Adobe's RTMFP Profile for Flash Communication (RFC 7425)
- Suite B Profile for Transport Layer Security (TLS) (RFC 6460)
- Suite B Profile of Certificate Management over CMS (RFC 6403)
The registry industry used the IETF process to develop the RDAP protocol specifications. We should use the same IETF process to document an RDAP implementation profile.
With RDAP we have a historic opportunity to address the most pressing WHOIS deficiencies. If we fail to take advantage of this opportunity we run the risk of RDAP becoming yet another failed attempt to replace WHOIS. ICANN will open a public comment period for their implementation profile proposal within the next few weeks. Be sure to read the proposal and share your opinions. It's time to take a different approach.
Written by Scott Hollenbeck, Senior Director of The Hive at Verisign
Follow CircleID on Twitter
The RIPE 71 meeting took place in Bucharest, Romania, in November. Here are my impressions of a number of the sessions I attended that I thought were of interest. It was a relatively packed meeting held over five days, so this is by no means all that was presented during the week. More can be found at the RIPE 71 meeting website.
A presentation on mobile satellite services looked at the Inmarsat IP environment. This service uses a constellation of spacecraft parked in geostationary orbit, which means that any signal bounced off these spacecraft has to travel a minimum of 35,786km up, and the same back down. Of course, if you aren't directly below the spacecraft it will be a longer trip. If you plug in the speed of light in a vacuum of some 299,792,458 m/s, and factor in the earth's roughly spherical geometry, it typically takes between one quarter and one third of a second to get a signal up to the spacecraft and back to earth. It also costs significant sums to build and launch these spacecraft into orbit, and they have a limited life due to the need to use thruster fuel to counter the oscillation induced by the moon. And of course there is the issue of spectrum: Ku band systems operate in the 12-18GHz band, which offers a reasonable compromise between power, bandwidth and rain attenuation. Little wonder that such services are expensive and limited in their capability; that it works at all is a triumph of engineering. Inmarsat currently provides a broadband data carriage service, which can operate at speeds of up to 492Kbps. Issues with an IP service over this medium include reliability, unwanted traffic, firewall setup and infected guest systems. The challenge of providing IP services is that of creating a robust unmanaged IP system that is fully functional, yet capable of defending itself not only from the toxic public Internet, but also from infected connected user devices! The speed of light isn't changing anytime soon, nor is the mass and rotational velocity of the earth (or at least that's what we hope!), so geostationary satellite systems will always have this delay to factor in.
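The delay arithmetic above can be sketched directly (a minimal illustration; 35,786 km is the altitude of geostationary orbit above the equator, and the figure is a lower bound since a ground station off to the side of the spacecraft sees a longer slant path):

```python
# Minimum round-trip time for a signal bounced off a geostationary
# satellite: up to the spacecraft and back down, at the speed of light.
C = 299_792_458              # speed of light in a vacuum, m/s
GEO_ALTITUDE_M = 35_786_000  # geostationary altitude above the equator, m

def min_rtt_seconds(altitude_m: float = GEO_ALTITUDE_M) -> float:
    # One bounce is up + down; slant paths only make this longer.
    return 2 * altitude_m / C

print(round(min_rtt_seconds(), 3))  # ≈ 0.239 s, i.e. about a quarter second
```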
Low Earth Orbit (LEO) constellations can counter this delay, but LEO spacecraft are moving relative to a fixed point on the earth, so continuous service requires a larger constellation of spacecraft to offer the same global coverage as four or five geostationary spacecraft. Inmarsat is responding by launching its I-5 spacecraft, each of which uses 89 small Ka band transponders (at 27-31GHz) that can provide a digital carrier of 60MHz. Inmarsat is looking at the airline industry as a major customer for this high capacity IP service.
On a completely different note, there was a presentation on automated certificate management. For many years, server security has been positioned as a luxury good: while a domain name might cost a few dollars a year, a security certificate can cost tens or even hundreds of times that amount. In a world of pervasive surveillance and toxic attacks by highly capable agencies, there has been a push to provide decent security services as a commodity good. One such project is the combination of ACME and Let's Encrypt. The problem ACME addresses is that certificate issuance is not only expensive but also needlessly complex, and most potential users are deterred from even applying for a domain name certificate. The ACME work builds on the REST API framework to generate Certificate Signing Requests for DV (domain validated) certificates, which any CA can use to provide a largely automated service interface for certificate maintenance. ACME uses a proof-of-possession test, having the applicant place a named token on the domain name's website. Once the applicant passes the proof-of-possession test, it can then generate certificate signing requests. Let's Encrypt is a Certification Authority that will offer free certificates using this REST API interface. A public beta of Let's Encrypt opens on 3 December, and there are already a number of open source efforts (such as the rather neat hack to automate certificate issuance) intended to make use of this interface to provide ready-to-use user tools.
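The proof-of-possession flow can be sketched as follows (a simplified illustration of ACME's HTTP-01 challenge; the exact token and key-thumbprint construction is defined in the ACME specification, and the domain name here is a placeholder):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding throughout.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk_json: bytes) -> str:
    # The applicant concatenates the CA-issued token with a hash of its
    # account public key, binding the challenge to that account key.
    return f"{token}.{b64url(hashlib.sha256(account_jwk_json).digest())}"

def challenge_url(domain: str, token: str) -> str:
    # The CA fetches this well-known URL on the domain being validated;
    # a correct response authorizes subsequent certificate signing requests.
    return f"http://{domain}/.well-known/acme-challenge/{token}"

print(challenge_url("example.com", "evaGxfADs6pSRb2LAv9IZ"))
```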
The lightning talk on traffic dependencies between IXPs showed that incidents at one major exchange point can have cross impacts on neighbouring exchanges. Traffic paths on the Internet are very often asymmetric, and altering the flows through one path will create impacts on other paths. I was amused to see the presenter advocate the widespread adoption of path symmetry as a possible response. I thought that the whole idea of packet switching was to improve the efficiency of networks by removing the overheads of maintaining virtual circuits across the network!
There was an interesting report on BGP hijacks. In this experiment the researchers deliberately announced a "borrowed" route on exchanges, targeting the route announcement at each exchange point in turn, and counted the number of peers at these large exchanges that picked up and learned the route without any reference to a route registry entry or any other form of pre-provisioning. The experiment used simple pings to the exchange neighbour with a "borrowed" address as the source of the ping. In some ways, this experiment just confirms what we see each and every day with routing leaks: few folk filter, and stuff just permeates through a loose fabric of mutual trust. Little wonder that abuses are so common!
The presentation on high speed packet capture was somewhat esoteric. The goal was to perform packet capture at rates of up to 15M packets per second. This gives a time budget of 67 nanoseconds per packet, which is beyond the capabilities of most systems. In this case the presenters used Intel's Data Plane Development Kit (DPDK), a library that permits the network interface to DMA directly into memory. The approach is to use an OpenFlow switch to create a set of segmented packet streams, perform packet capture on each stream, and then reunite the packet logs offline. This is a classic application of the scaling technique of splitting a hard serial problem into a number of smaller, tractable parallel problems.
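The time budget and the divide-and-conquer step can be sketched numerically (illustrative arithmetic only; the real system splits the streams in an OpenFlow switch, not in software):

```python
def per_packet_budget_ns(packets_per_second: float) -> float:
    # At 15M packets/second there are only ~67 ns to handle each packet.
    return 1e9 / packets_per_second

def split_budget_ns(packets_per_second: float, streams: int) -> float:
    # Segmenting the traffic into N parallel capture streams multiplies
    # the per-packet budget available to each capture process by N.
    return per_packet_budget_ns(packets_per_second) * streams

print(round(per_packet_budget_ns(15e6)))  # 67 ns for a single stream
print(round(split_budget_ns(15e6, 4)))    # 267 ns per packet with 4 streams
```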
There is continued interest in exploring remote-triggered black holes as a means of pushing route filter rules for DDoS mitigation closer to the sources of the DoS traffic. The presentation on fastmon covered the combination of sFlow traffic monitors with thresholds, and the translation of over-threshold traffic into a BGP flowspec for remote black holing.
There has been considerable interest in recent times in mapping and monitoring the network. Of interest these days is the challenge of mapping the instances of anycast services, such as the location of Google's public DNS servers, or the location of instances of Cloudflare's points of presence. Better overall Internet performance, in terms of the user experience, is all about the combination of adequate capacity and reduced delay. Understanding the relative location of users and the content they are attempting to access, and the network paths that lie between these two points, can lead to better performing networks. One presentation was concerned with geolocating anycast service instances, using distributed traceroute measurements and probabilistic determination of location based on signal propagation times. A second concerned a new program to monitor mobile networks using in-band active measurement (Monroe).
It's good to see the network management story finally improving. For decades the state of the art was SNMP and Expect scripts driving the equipment's CLI. If you were really sophisticated you also ran Rancid to detect config changes to the production network components. But that was where it sat, and it's not very good. The presentation by Facebook is illustrative of a number of large scale providers' efforts to improve upon this story. They have tried to use a suite of conventional artificial intelligence techniques to detect patterns in reported network events and to associate remediation actions with these events, evidently with considerable success. The underlying tool set now includes Git to support shared code and versioning, and tools such as Ansible, Puppet, Chef and Salt to manage the various configurations and their dissemination. Behind all of this is a long-anticipated move away from ASN.1 as the lingua franca of network management; in its place it's now JSON or YAML. It's a welcome development in the area of network management.
The weekend prior to the RIPE meeting, there was a two day "hackathon." Out of this came a rather neat piece of code, arising from a hack team led by Martin Levy of Cloudflare, "ASNtryst". The approach is delightfully simple: take a set of hop-by-hop traceroute records, locate in the traceroute those steps where the originating AS changes, and geolocate these points of AS exchange. By analysing enough of these traceroutes, the result is a surprisingly good map of where networks interconnect.
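The core of the idea can be sketched in a few lines (a toy version; the real tool must also map hop IPs to their origin ASes and geolocate the boundary points, and the addresses and AS numbers below are illustrative):

```python
def as_boundaries(hops):
    """hops: ordered traceroute hops as (ip, asn) pairs."""
    # An interconnection point is wherever the originating AS changes
    # between two consecutive hops.
    return [
        (ip_a, ip_b, as_a, as_b)
        for (ip_a, as_a), (ip_b, as_b) in zip(hops, hops[1:])
        if as_a != as_b
    ]

trace = [("10.0.0.1", 64500), ("192.0.2.1", 64500),
         ("198.51.100.1", 64501), ("203.0.113.9", 64501)]
print(as_boundaries(trace))  # one boundary, where AS64500 hands off to AS64501
```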
As is usual for RIPE meetings, it was a well organised, informative and fun meeting to attend in every respect! If you are near Copenhagen in late May next year I'd certainly say that it would be a week well spent.
Written by Geoff Huston, Author & Chief Scientist at APNIC
According to data from the FttH Council, the number of homes passed with fibre in the US increased 13% year-on-year in 2015, to 26 million. Combined with Canada and Mexico, the number of passed homes has reached 34 million. The take-up rate is excellent by international standards, at more than 50%. Operators commonly look for about 20% to 30% committed take-up before work can begin on new fibre infrastructure for communities. Yet once the cable is in place, a greater proportion of people tend to sign up for services once the improved experience of fibre over DSL and cable alternatives is understood and broadcast to other potential customers, often by word of mouth.
The Council has suggested that some 1,000 FttP providers in North America expect to offer a 1Gb/s service by 2020.
Certainly the Google factor is important in this market. The company has set in train a number of projects with municipalities to develop 1Gb/s fibre networks that would connect hundreds of thousands of people in a number of areas across the country. The first Google Fiber city was Kansas City; the second, announced in 2013, was Austin, Texas.
During 2015 Google has worked on expanding Google Fiber to Atlanta, Nashville, Salt Lake City, Phoenix, Portland, San Antonio, San Jose and the North Carolina towns of Charlotte and Raleigh-Durham. Also benefiting will be more than a dozen smaller towns in the vicinity of these conurbations. In July this year Google Fiber received a licence from the Texas Public Utility Commission (PUC) to operate as an ISP in San Antonio. Here, the company aims to lay over 4,000 miles of fibre cabling throughout the city. At the same time it also received a licence from the city council of Tempe, Arizona, to build a similar network.
Each time Google becomes involved in telecommunications, it gets international media coverage, and each time it stimulates a familiar response from telcos. These operators had long been content to provide customers with services which they deemed adequate, but which customers themselves by and large considered inadequate. Operators could coast along knowing that they had their customers over a barrel, because there was no effective competition in their licensed areas.
Google changed this by encouraging municipal involvement in broadband infrastructure. Now there is growing pressure from municipalities across the country to override restrictive State-level laws which prevent them becoming involved in telecom services. These communities often club together, as happened recently in Colorado where 26 cities and towns and 17 counties — 43 communities — all voted overwhelmingly to give themselves authority for the provision of telecom services. In Colorado the measures reflected years of frustration with the poor services offered by the incumbent telcos.
In general, the telcos' response has taken two forms. On the negative side they have used their lobbying strength to curtail municipal involvement. This looks to be a losing battle. On the positive side they have strengthened their own investment in extending gigabit services. This reflects the recognition that they must swim with the tide rather than be borne down by it.
So now we are seeing the Verizons and AT&Ts of the market adding cities to their gigabit footprints at a considerable rate, while smaller operators are doing likewise. As an example, the regional cableco Cable ONE will extend its gigabit service across more than 200 towns during 2016, making it available to the majority of its customers by the end of the year. This work is an extension to the residential sector of the 1Gb/s service already offered to business customers, and is a response to customer demand as much as to the general direction the sector is taking.
Written by Paul Budde, Managing Director of Paul Budde Communication
— Prime Minister of Israel, 14 November 2015
Terrorism in Europe did not begin with the attacks in Paris, nor (alas...) will it end with them.
But militant Islamic terrorism is a different phenomenon from the terrorism of organisations like the Red Brigades or the Red Army Faction, or even from the terrorism of the Bulgarian Communist Party's combat groups in 1941-1944.
This article aims to remind readers of some manifestations of this militant terrorism in Europe, so that it can be better understood that what we have seen in recent days follows a pattern, and that the reaction of European politicians to such attacks in recent years is part of the problem.
Terrorism against Jews in Europe — including (especially?) in France — is a phenomenon that did not begin in January of this year, but much earlier. Indeed, the news that a Jewish teacher (sic!) was attacked in Marseille on 19 November passed almost unnoticed. He was stabbed by three people, supporters of the so-called Islamic State. The attackers, of course, fled — as all scoundrels and cowards do. One wonders why all these terrorists, who so boldly kill people, are afraid to face the justice of the very liberal West they are fighting against.
Let me also quote something from the German magazine Der Spiegel from 2012:
"More and more French Jews are buying homes in Israel, alarmed by the growing anti-Semitism in France. Many of them complain of being harassed in public places. For them, the country is no longer a safe place in which to raise their children. After the murders in Toulouse, the emigration wave will probably intensify, reports the German magazine Der Spiegel."
In case you have forgotten — that was when gunmen opened fire on a Jewish school and killed mostly children.
Here are a few details about the overall mood in France three years ago:
"The French Jewish community documented 90 anti-Semitic incidents in the first ten days after the attack in Toulouse. The Jewish Protection Service recorded 148 anti-Semitic incidents in March and April, 43 of which were classified as violent... Jewish cemeteries in Nice were desecrated... threatening emails and phone calls were registered at various [Jewish] schools across the country... Jewish children were beaten."
The authorities' response to these attacks? To declare them "unacceptable violence"!
But it is not only France, of course. The data from across Europe are unequivocal — anti-Semitism continues to spread like a plague across the continent. Moreover, a link can be observed between sentiment against Israel's policies and anti-Semitism. The cited State Department report contains the stark numbers: more than a twofold increase in anti-Semitic incidents in France, and more than a doubling of the number of Jews permanently emigrating from France to Israel — 3,293 French Jews relocated in 2013, while last year the figure was already 7,231. Cases of anti-Semitism in Germany, Sweden, Norway and elsewhere are also described, which is an equally sad fact.
Where does Bulgaria stand?
Some probably think there is no anti-Semitism here — just as there is no terrorism? They are wrong on both counts. There is anti-Semitism, and there is terrorism. But there is also a lack of understanding of the link between the two, and of the fact that whether Bulgaria sees itself as a member of the EU/NATO or as Russia's closest friend makes no difference whatsoever to the terrorists. Militant Islamic terrorists make no distinction between states, nor between the people they kill. It is true that they sometimes take the trouble not to kill everyone, but in attacks like the one in Sarafovo or in Paris they do not hesitate to take the life of anyone — regardless of sex, race, religion and so on.
This terrorism is an evil that wants to destroy our civilization.
And Bulgaria along with it.
Because our way of life — however strange it may seem to us! — is not to the terrorists' liking. I say "however strange it may seem" because many Bulgarians claim that our life is not good, that we are the unhappiest people in the world, and so on.
The terrorists are not moved by the fact that our people are badly off even without terrorism.
Just as they struck us in Sarafovo, they may try to strike us again.
Do not worry that I am saying out loud things that every national security specialist ought to be thinking and writing for the country's top leadership; my words are not a challenge to fate, but simply a sharing of facts with my readers.
What we write and say to one another will not make them any angrier. The terrorists have long been angry — not at us specifically, but in general: at the West, and at the East, at the US, the EU, China, Russia and so on. When a person is angry and full of hatred, the next step — picking up a weapon — is not a big one.
However much we duck and keep quiet, we will not be spared if they decide to deal with us (again) — just as we were not spared in Burgas.
But there are elementary things that the state and its citizens must do. We must urgently organize our security services, guard our borders better, and demand that the state put an end to the trafficking of people from the Middle East, Afghanistan and who knows where else across the territory of our homeland.
The state must invest immediately in the forces and resources that will ensure a higher level of safety for all citizens.
I write this fully aware that there are people to whom such words sound ugly and wrong; people who will point (rightly!) to our poor parents and grandparents and say that higher pensions matter more than spending money on equipment, vehicles, aircraft and ships that will not save us if someone has us in their sights. You are wrong. What needs to be done is not only the purchase of equipment, but also the training of people and the recruitment of staff for the Interior Ministry and the State Agency for National Security (DANS) — people who know foreign languages, who understand the cultures of the countries of the Middle East, and so on and so forth.
This is part of our solidarity as members of NATO and the EU. It is part of our obligation as human beings.
For eight years now we have treated EU membership the way we would have liked to receive freedom — in Levski's words, "on a platter, at home."
For the average Bulgarian, EU membership is above all "money from the funds" with which motorways are built.
But the money is not what matters. What matters is that we are part of a greater Europe, and of NATO. And do not fall victim to the cheap propaganda expressed in the words: "Well, who is really going to attack us, that we need this NATO?"
As recent years have shown, there is not a single country immune to the disease of terrorism.
The EU and NATO are only as strong as their weakest link. Not to put too fine a point on it, but you have probably realized why we have the luck of being among the suspects for that weakness. And no, it is not just the idiotic remark by Russia's ambassador to the EU, Chizhov, that Bulgaria would be "a kind of Trojan horse in the EU, outside, of course, the negative original sense of that metaphor."
Rather, it is because we continue to live by the principle "it certainly won't happen to us." A principle that has repeatedly proved inapplicable to Bulgaria. Not only does it happen precisely to us, it happens to us more than once. And it is always bad.
Wearing a thread on your wrist, chanting against the evil eye, pouring bullets against fear and other old wives' remedies are not the right solution to the problem of terrorism in the 21st century. They only give people the false feeling that they are safe, while the danger keeps growing.
God protect Bulgaria and the Bulgarians!
Internet public policy — and the technical ecosystem — is at a crossroads and the choice of CEO that ICANN's board makes now is probably the most important such choice it has ever made. Since I work in Internet policy across the Geneva institutions where more than 50% of all international Internet-related policy meetings take place, and have worked at ICANN in senior positions in the past, I thought I would suggest some qualities the next CEO should have. I would like to be clear that everything that follows should not be read as a comment on the current or any previous CEO; it is a fantastically difficult and almost entirely thankless job no matter who does it.
Firstly, and most importantly, it must be someone who has demonstrated that his or her vision for ICANN will be implementing the vision of the community, rather than his or her own. Everyone who applies for the job will 'talk the talk', but unless they have demonstrated a history of 'walking the walk', they shouldn't get the nod. I have known and worked with many heads of international organisations, both intergovernmental and non-governmental, and the most successful have been those who stuck to ensuring that the organisation was well run, kept very clearly within its mandate, and was demonstrably focused on implementing the views of their stakeholders instead of 'free-lancing' or 'activism'. Corporate executives simply don't have this mindset; they're chosen because they articulate a persuasive vision for where the organization they lead should go in order to be profitable. ICANN gets constant criticism for appearing too focused on monetization of domain names, and choosing a corporate-style CEO is only likely to make that worse. Profit shouldn't be the ICANN CEO's focus: that's the domain market's job, and the search for profit incentivises growing ICANN itself — again, the wrong objective.
Second, he or she should focus on ensuring ICANN delivers value for money in implementing the community's decisions efficiently and transparently and ensuring that ICANN sticks very clearly within its mandate. For example, time and energy spent in helping developing countries implement DNSSEC and other security related standards integrally related with names and numbers is clearly within ICANN's mandate, as is work to prevent the misuse of those identifiers by fraudsters and organized criminals. ICANN has first-rate people in this area like David Conrad and Dave Piscitello to name just two. Prioritizing issues like this would do far more to increase trust online — and in ICANN — than launching projects like the NetMundial Initiative no matter how worthwhile. The measure of whether to get involved in an external initiative ought to be: will ICANN have to work to explain how the activity relates to its mission, or will the relationship be clearly obvious? Anything that doesn't fall within the latter category should be avoided — or at a minimum the ICANN community consulted in advance.
Thirdly, the Board should pick someone who is genuinely not interested in spending much time attending international meetings or flying to meet government leaders on matters not directly related to that country's GAC membership. That's what external relations staff are for. My years in international public policy tell me that meetings requiring a CEO in person are few and very far between. The time ICANN's CEO doesn't spend on activities like these could be spent with the community. They're a diverse, demanding, interesting group: time spent listening to and learning from them will be time well spent.
I think if the Board sticks closely to these criteria, it has the best chance of a successful outcome — both for the new CEO and for ICANN.
Regarding the process itself, this should be the last selection that the Board makes entirely by itself. While the (mostly) transparent process that UN agencies use isn't necessarily the answer in my view — aside from anything else, it overly politicises the choice of leader instead of focusing on the qualifications of candidates — ICANN can do better than the current model. Next time, it should.
Full disclosure: I applied for the CEO post but was not shortlisted.
Written by Nick Ashton-Hart, Associate Fellow, Geneva Centre for Security Policy
The Domain Name System (DNS) offers ways to significantly strengthen the security of Internet applications via a new protocol called the DNS-based Authentication of Named Entities (DANE). One problem it helps to solve is how to easily find keys for end users and systems in a secure and scalable manner. It can also help to address well-known vulnerabilities in the public Certification Authority (CA) model. Applications today need to trust a large number of global CAs. There are no scoping or naming constraints for these CAs — each one can issue certificates for any server or client on the Internet, so the weakest CA can compromise the security of the whole system. As described later in this article, DANE can address this vulnerability.
DANE is Built on DNSSEC
DANE is built on the foundation provided by the DNS Security Extensions (DNSSEC). DNSSEC is a cryptographic system to verify the authenticity of data in the DNS. Domain owners digitally sign data in their DNS zones, and DNS resolvers authenticate these signatures as they look up DNS records. This provides protection against well-known attacks, such as DNS cache poisoning and DNS spoofing.
Validating with DNSSEC
In effect, DNSSEC transforms the DNS into an authenticated directory of information associated with domain names, and as a result some natural follow-on benefits appear. DNSSEC can be used to securely store and retrieve cryptographic keying material, such as public keys, X.509 certificates, etc. in the DNS. These can in turn be used to significantly strengthen the security of Internet applications, and address a variety of vulnerabilities that exist in today's deployed systems.
Security for TLS Using DANE
The "TLSA" DNS record type defined in the DANE protocol describes how to associate Transport Layer Security (TLS) certificates with the domain names of servers. These can then be used to secure TLS applications, such as Web (HTTPS), email transport (Simple Mail Transport Protocol (SMTP) over TLS), instant messaging (XMPP over TLS) and many more.
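A TLSA association can be sketched like this (an illustrative helper, not a full implementation; the usage/selector/matching-type values follow the TLSA specification, and the host name is a placeholder):

```python
import hashlib

def tlsa_owner(host: str, port: int, proto: str = "tcp") -> str:
    # TLSA records live at a name derived from the service port and
    # transport, e.g. _443._tcp.www.example.com for an HTTPS server.
    return f"_{port}._{proto}.{host}."

def tlsa_rdata(cert_der: bytes, usage: int = 3, selector: int = 0,
               matching: int = 1) -> str:
    # usage 3 (DANE-EE): the record directly names the end-entity cert;
    # selector 0: match the full certificate; matching 1: SHA-256 digest.
    digest = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {matching} {digest}"

print(tlsa_owner("www.example.com", 443))  # _443._tcp.www.example.com.
```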
SMTP over TLS is one application where DANE is seeing growing production scale deployment on email servers with large numbers of users. The appearance of DANE for SMTP transport security is particularly timely. SMTP over TLS has traditionally been used in an opportunistic manner — it is used only if both sides of the SMTP connection support it. However, a man-in-the-middle attacker can easily subvert the security by stripping away the TLS capability indication and downgrade the connection to be unencrypted. With DANE, SMTP servers use the presence of a signed TLSA record in the DNS to (a) confirm the intent to secure the session with TLS, preventing downgrade attacks, and (b) authenticate the connection with DANE.
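The downgrade-resistance logic described above reduces to a simple decision (a schematic sketch, not an actual MTA implementation; the function and return strings are illustrative):

```python
def smtp_transport_policy(tlsa_records: list, dnssec_validated: bool) -> str:
    # A DNSSEC-signed TLSA record is an authenticated statement that the
    # peer supports TLS, so a missing STARTTLS offer must be treated as
    # an attack rather than a reason to fall back to cleartext.
    if tlsa_records and dnssec_validated:
        return "require TLS, authenticate peer via DANE"
    return "opportunistic TLS (legacy, downgradable)"

print(smtp_transport_policy(["3 1 1 <digest>"], dnssec_validated=True))
```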
Additional DANE record types are currently in development to accommodate more applications.
Security for Email Using DANE
The upcoming OPENPGPKEY and SMIMEA records will allow use of the DNS to store and retrieve PGP (Pretty Good Privacy) public keys and S/MIME certificates for end users. PGP and S/MIME are commonly used for secure end-to-end messaging (i.e. encryption and digital signing). DANE provides a new way to authenticate these keys and certificates in addition to or in place of the current ways that users do this. In addition the DNS provides an always available, globally distributed mechanism to find these keys, solving a crucial problem of easily locating keys for inter-organizational email. The end-to-end messaging scenario is discussed in detail in a recent Verisign blog post.
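Key lookup by email address can be sketched as follows (based on the proposed OPENPGPKEY scheme, in which the address's local part is hashed with SHA-256 and truncated to 28 octets; the email address is a placeholder):

```python
import hashlib

def openpgpkey_owner(email: str) -> str:
    # The local part is hashed so that mail addresses need not appear
    # verbatim in the DNS; the key is published under _openpgpkey.<domain>.
    local, domain = email.split("@", 1)
    digest28 = hashlib.sha256(local.encode("utf-8")).digest()[:28]
    return f"{digest28.hex()}._openpgpkey.{domain}."

print(openpgpkey_owner("hugh@example.com"))
```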
Emerging projects, such as the US National Cybersecurity Center of Excellence's (NCCoE) Secure Email initiative, are already exploring ways to use such mechanisms. With the advent of DNSSEC and DANE, it is now possible to deploy inter-organizational secure email in a truly scalable and manageable way.
More Security Use Cases For DANE
The proposed Payment Association (PMTA) record associates payment information (such as account numbers, Bitcoin wallets and other forms of electronic currency) with easier to use domain names typically corresponding to users. Companies like Armory and Netki are already integrating DANE PMTA support in their Bitcoin wallet implementations.
There is a proposal to enhance the TLSA record to allow the use of TLS client certificates. This fills a gap in the current specification, which only works with TLS server certificates. With this enhancement, many applications that employ client certificates will be able to use DANE to authenticate them. In particular, some Internet of Things designs already plan to use this mechanism, where large networks of physical objects identified by domain names may authenticate themselves using TLS to centralized device management and control platforms.
Another proposal in progress involves a DANE and DNSSEC authentication chain extension for the TLS protocol. This mechanism allows a TLS server, when prompted by a compatible client, to deliver the TLSA record corresponding to its server certificate along with the complete chain of DNSSEC records needed to authenticate it. The TLS client gains a performance advantage by not needing to do all these DNS queries itself. It can also help in situations where the client finds itself behind a middlebox that impedes its ability to successfully issue DANE- and DNSSEC-enabled queries. These things are important preconditions for applications like Web browsers and Web servers to adopt DANE.
What it Takes for DANE to Work
In short, DANE provides the ability to use DNSSEC to perform the critically important function of secure key learning and verification. It can use the DNS directly to distribute and authenticate certificates and keys for endpoints. It can also work in conjunction with today's public CA system by applying additional constraints about which CAs are authorized to issue certificates for specific services or users — thereby significantly reducing risks in the currently deployed CA system. A recent paper from Verisign Labs explores this topic in more detail.
For more information, visit the Verisign Labs page on DANE.
Written by Shumon Huque, Principal Research Scientist at Verisign Labs
When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.
Independent of these protection technologies (or possibly because of them), we've also tried to educate users in how best (i.e. most safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it's almost like a government handing out maps to its citizens and labeling the streets, homes, and businesses that are known to be dangerous and shouldn't be visited — because not even the police or military have been effective there.
Today we instruct our users (and at home, our children) to be careful what they click on, what pages or sites they visit, what information they share, and what files they download. These instructions are not just onerous and confusing; more often than not they're irrelevant, since even after following them to the letter, the user can still fall victim.
The fact that a user can't click on whatever they want, browse wherever they need to, and open what they've received, should be interpreted as a mile-high flashing neon sign saying "infosec has failed and continues to fail" (maybe reworded with a bunch of four-letter expletives for good measure too).
For decades now, thousands of security vendors have brought to market technologies that are, in effect, predominantly tools designed to fill vulnerable and exploited gaps in the operating systems at the core of the devices end users rely upon. If we're ever to make progress against the threat and reach the utopia of users being able to "carelessly" use the Internet, those operating systems must get substantially better.
In recent years, great progress has been made on the OS front, primarily in smartphone OSes. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PCs, notebooks, or servers at home or work. There are a bunch of reasons why, of course, and I'll not get into that here, but there's still so much more that can be done.
I do believe there are many lessons to be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual, way before the Internet and way before computers, there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.
Back in the late 1840s, a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna, where he noted that many women in the maternity wards were dying from puerperal fever, commonly known as childbed fever. He studied two medical wards in the hospital, one staffed by all male doctors and medical students, the other by female midwives, and counted the number of deaths in each ward. What he found was that death from childbirth was five times higher in the ward with the male doctors.
Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference, ranging from mothers giving birth on their sides versus their backs, through to the route priests took through the ward and the bells they rang. His Eureka moment apparently came after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, succumbed to the same fate (being a pathologist in the mid-19th century was evidently not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that "cadaverous particles" (this was before germs were known) were being spread to the birthing mothers.
Dr. Semmelweis' medical innovation? "Wash your hands!" The net result, after doctors and midwives started washing their hands (in lime water, then later in chlorine), was that the rate of childbed fever dropped considerably.
Now, if you're in the medical trade, washing your hands multiple times per day in chlorine or (by the late 1800s) carbolic acid, you'll find it isn't so good for your skin or hands.
In 1890, William Stewart Halsted of Johns Hopkins University asked the Goodyear Rubber Company whether they could make a rubber glove that could be dipped in carbolic acid in order to protect the hands of his nurses, and so were born the first sterilized medical gloves. Disposable latex medical gloves, first manufactured by Ansell, didn't appear until 1964.
What does this foray into 19th-century medical history mean for Internet security, I hear you ask? Simple, really: every time end users need a computer to access the Internet and do work, it needs to be clean and pristine. Whether that means a clean new virtual image (e.g. "wash your hands") or a disposable environment that sits on top of the core OS and authorized application base (e.g. "disposable gloves"), the assumption needs to be that nothing the user encounters over the Internet can persist on the device they're using after they've finished their particular actions.
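The "disposable gloves" model can be sketched in a few lines: run any Internet-facing activity inside a workspace that is destroyed the moment the session ends. The sketch below is purely illustrative (the function name and the use of a temporary working directory are my own); a real product would isolate far more than the filesystem, but the principle is the same.

```python
import os
import subprocess
import sys
import tempfile

def run_in_disposable_env(command: list[str]) -> int:
    """Run a command inside a throwaway working directory that is wiped
    as soon as the command exits -- the 'disposable glove' model."""
    with tempfile.TemporaryDirectory(prefix="disposable-") as workdir:
        result = subprocess.run(command, cwd=workdir)
        leftover = workdir
    # The directory, and anything the command dropped into it, is now gone.
    assert not os.path.exists(leftover)
    return result.returncode

# Example: a "downloaded" file exists only for the lifetime of the session.
run_in_disposable_env(
    [sys.executable, "-c", "open('dropped.txt', 'w').write('payload')"]
)
```

Nothing the command wrote survives the session; the next invocation starts from a pristine state, which is exactly the guarantee the hand-washing analogy calls for.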
This obviously isn't a solution for every class of cyber threat out there, but it's an 80% solution — just as washing your hands and wearing disposable gloves as a triage nurse isn't going to protect you (or your patient) from every post-surgery ailment.
Operating system providers or security vendors that can seamlessly and automatically provide a clean, pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game, altering the battlefield for attackers and the tools of their trade.
Exciting times ahead.
Written by Gunter Ollmann, CTO at NCC Group Domain Services
The writer Dimo Raykov* has published an excerpt from his book "Diagnosis: Bulgarian Abroad." You can buy the book on the publisher's website or here, and read his explanation of why he wrote it in "e-Vestnik."
Dimo Raykov allowed me to reprint from his Facebook page the following excerpt, dedicated to his father:
In memory of my father
(An excerpt from the book "Diagnosis: Bulgarian Abroad," which my father kept under his pillow, together with the rest of my "Parisian" books and the gleaming statuette of the Eiffel Tower... Forgive me, Dad, that I didn't manage to bring you back to Paris... In fact, you are already there, aren't you... Yes, in that other Paris...)
As of today, I dedicate this story also to those who died in the attacks in Paris. It turns out the forces of evil have no day off, and no nationality... They simply want to deprive us of Paris. Of that "our" Paris, our very own Paris, which each of us carries in his heart. And without which we are dead while still alive...
And one more thing: in these "dead" days in Paris I find myself wondering which attacks are, in fact, crueler?
Those which in January and now have paralyzed Paris, or those which slowly and agonizingly bleed dry the imagination and the sensibility...
I would like to ask my father. But he is no longer here...
The dream, the unattainable dream of my father, that is, of the miner Petko Dimov Raykov, now a pensioner, to see Paris, has now come true.
The miracle, yes, the miracle happened!
Dad is at the top of the Eiffel Tower. And Paris is below him...
In the elevator to the top of the tower, Dad is not afraid. On the contrary, like a big child he radiates curiosity, his whole being somehow softened... To the left and to the right he says perhaps the only word he knows in French: bonjour! Bonjour, bonjour... And the people around reply in kind: bonjour, bonjour... The magic little word, spoken with a smile, yes, always with a smile...
When he reached the top, oh Lord, the very top of the Eiffel Tower, there, beneath the stars..., Dad embraced me and murmured:
"Thank you, son, for giving me the chance to see a normal world..."
And a large tear, perhaps begun down there, in the mines of the Strandzha mountains, rolled and flew from on high, from the stars toward... French soil... Was it seeking consolation, that clear and salty drop of Bulgarian sorrow, in her, the land of freedom...
Later, already at the foot of the tower, Dad doubled over.
It turned out he had had a hernia for years, and it had now flared up. But how could he tell me? After all, the dream of his dreams was to go up the "Eiffelka," as he tenderly called that human, in fact divine, creation...
How else would he see it, the normal world... Which had been hidden from his eyes all his life; the trip to Paris was the first in his life beyond the border...
"Dime," Dad says to me, looking at me with big, such very big and moist eyes... "If anyone had ever told me that I would live to this age, I, the orphan who all his life has lived in want... And then come to Paris... Climb to the top of the Eiffel Tower... I, the miner who toiled underground all his life, now to climb to the top of Paris... And to see... a normal world... That's why I want to treat you all!"
And treat us Dad did. Well, is it a small thing: to come to Paris... And to climb to the top of the "Eiffelka"...
"This," says Dad, "was my greatest dream. For us to gather, and for me to treat you..."
We, those closest to him, that is, his family, tell him: come now, why should you pay, let us...
No, Dad is adamant:
"This is my wish. That I treat you in a Parisian restaurant..."
We order, choosing as economically as we can; yes, the restaurants here are very expensive, especially for us Bulgarians...
We eat: oh, how delicious it is, the French remain masters of food, especially of desserts... Most pleased are the granddaughter, that is, Dad's great-granddaughter, and, of course, he himself... Oh, how they devour the dessert... And that blasted chocolate mousse, how it melts in the mouth... So-o-o..., your palate tingles with sweetness...
We finish eating.
The waiter comes; Dad pulls the slip with the bill toward himself and takes out banknotes...
To my surprise, Dad's face barely flinches.
But there, at its very corner...
How familiar that moisture is to me...
We are merry, it is good, life after all...
In the middle of the night I hear a peculiar sound.
I listen: something like sobbing, but a cry without sound, rather a muffled heaving coming from within... I crack open the door of the other room...
Yes, Dad is not asleep. I go to him.
He looks at me, and looks...
"Dime, only today did I realize that my life has passed always waiting for a paycheck, always patching together the month's money, always worrying whether it would last... Only now did I understand, son, that I, and your mother, and you too, have hardly lived at all. Because there is nothing on this earth more terrible than poverty, than expecting every day, every hour, to be left without a penny... Why? How so? Look, I, and your mother too, worked honestly all our lives, decently, we never stole so much as a pin, we paid everything to the last cent... And we were always poor... They always strung us along: there's no money now, there's a crisis now, tomorrow the dreamed-of day will come, always tomorrow, tomorrow... And look how people live here... Here, to you I'll confess. I paid today, but my heart was weeping... Weeping, because the bill was... as much as my pension... For one meal, a whole pension... I wept, because in that moment I remembered how cruel I had been to your mother, and to my own self, how I never let her buy even one dress fit to be seen in, and she was a woman, and a beautiful woman at that... And so she passed away without ever knowing that there is a Paris in this world... That there is a place where there isn't only "scrimping," only counting to the last cent... That there is a place where people rejoice that they are alive... She went abroad one single time, on an excursion to neighboring Turkey with the other women; they set off from here cheerful, smiling... I gave her only twenty levs to exchange, to buy herself something; the Turks were said to have good Turkish delight... And I told her: mind you don't spend much, bring back the change... She came home that evening; she had bought a box of Turkish delight and returned the rest of the money... There they had eaten of the bread they had brought along, the cheese, the tomatoes from the garden... I probe her: don't the shops there carry goods? Oh, they do, she answers, and how! The Turks have everything under the sun. It's full, crammed with goods. But how could I give that much money for food, and the other women were with me too...
Yes, so Dad spoke, his face turned sallow.
"Now don't be angry with me for these words. I know you will understand me. I need time to "digest" all this, for it to "turn over" inside me: how can I give a whole pension for one meal... You know, son, I'm not all that tight-fisted, but all at once like this... I had prepared myself, I said to myself, to hell with it, what of it, but the bill turned out big... You're not angry with me, are you? I gave it from the heart, but my heart ached, Dimcho, you understand why, don't you, son... So don't be angry with me, let me weep a little... For our lost life let me weep... It will pass, but... Who punished us like this, and why... Poor all our lives, so that we carry poverty even in our heads..."
Yes, thus Dad ended his peculiar, spontaneous confession.
I look at the beloved face.
And I think to myself: from this visit of Dad's I understood that all the talk of some genetic predestination, of "he who is born to crawl will crawl all his life" and suchlike, isn't worth a brass farthing and is smashed to smithereens. One need only observe attentively people like my dad. He, the pauper, the orphan, born by the whims of fate in a nothing little village there, in that mountain range almost unknown to the European, Strandzha, lying half in Bulgaria, half in Turkey, having lived to the age of 81 in total, controlled destitution, forever awaiting that dreamed-of day "tomorrow" which, of course, never comes in those geographically cursed latitudes, now, suddenly finding himself in normal living conditions, amid normal relations between normal people, turns out to be perhaps more normal than the most normal of Frenchmen...
Gone now are the former nervousness and gauntness of his face, gone is the fear in his eyes... A normal, elderly man...
It so happened that during the days of Dad's visit to Paris the two of us found ourselves at a soirée, that is, a cocktail reception. Truth be told, Dad resisted: why should I embarrass you, son, go and do your business and I'll wait for you outside... The dear man... what outside? The cold was unbearable, even if this was Paris... And, of course, Dad ended up inside, there, among the "chic," there, among the normal people...
And Dad's bearing was such that when at the end we were all leaving, one of the organizers, who knew me, came up to us and asked: "Monsieur Raykov, who is the gentleman with you? I should like to make his acquaintance. For the first time in my life I see such an intelligent presence, so unaffected, so delicate, why, so French, but French of those old times which even we, Frenchmen born and bred, have long since half forgotten..."
Incredible, but true, isn't it?
What does all this show?
Yes, the theory of those who speak of genetic origins is beside the point. But something more: the sin lies with those who placed in conditions of poverty, humiliating and total, people like my dad. And such people are a good many of us Bulgarians. A good many, did I say?
And what does it come to?
That if Dad had lived in the conditions in which the French live, for example, he would have had an entirely different fate; he would have become a different man and would have given something entirely different to his country...
And who is to blame for that?
Who in Bulgaria has for decades now been deciding who shall be a "European" and who a servant?
And by what criterion?
Thoughts, thoughts, thoughts...
Why do they assail me precisely here?
That picture will always be before my eyes...
Dad is at the top of the Eiffel Tower...
I see tears rolling down his high-cheekboned face. I ask him nothing.
And he says nothing to me.
Simply: Dad is weeping... There, at the top of the Eiffel Tower.
Silent, thick little rivulets of tears running down the beloved face...
What could an elderly man who has seen and suffered so much, a man like my dad, be weeping for?
And there of all places, in Paris, at the top of the Eiffel Tower, the symbol of the free, of THAT, the normal world... To which my Bulgarians, and Dad too until only days before, traveled in the only way possible: through their imagination. You lie down, pull the covers over your head and... off you go, off you go...
Dad's time in Paris is over.
And I take him back to Bulgaria.
Gloomy, gray, hopeless...
People pass by, that is, our compatriots.
Dad greets them.
Not with "bonjour," of course, but with its Bulgarian equivalent: good day!
"Good day, good day..."
I glance at Dad. And I shudder.
His face has taken on its former look again: sharpened, with a sallow tinge at the edges... And that fear in his eyes...
We embrace; Dad presses his head to my chest:
"Dime, you'll come again, won't you? And then you'll take me with you. For good this time! Won't you, Dime?"
The lump in my throat chokes me.
I travel. Back to Paris.
Together with the lump. And with the questions: who stole Dad's life, and why? Who stole our life too, and why?
Yes, the questions again...
The plane lands.
Suddenly lights, a swarm of lights, burst before my eyes.
Yes, the lights of Paris...
The City of Light, in fact. And of... the Eiffel Tower.
That is, the city of... my father, isn't it? And of "my" Bulgarians. Who in the evenings "travel": they simply pull the covers over their heads and set off.
Yes, off that way, off that way...
[end of excerpt]
* It matters to us Bulgarians that there are people like Dimo Raykov who "permit" themselves to disturb the people's peace. To our average countryman nothing matters except his own lazy and drowsy existence. Why are people like Dimo Raykov called "awakeners"? Because the people are asleep. Or, as Levski said before the court: "Our Bulgarians desire freedom, but they will accept it only if it is served up to them at home on a platter." If we think about it, we will see that Levski overstated our countryman's desire for freedom. Our countrymen now understand freedom as the chance to sit in front of the television with a little salad, a little rakia, or at least a crust of hard bread, and... watch soap operas. And people like Dimo Raykov have taken up the most thankless, and at the same time the most noble, task: to awaken and enlighten our Bulgarians (my note: Veni Markovski).
Following several months of pressure, ICANN has revealed a breakdown of figures under its catch-all term of "professional services," exposing its political expenses, Kieren McCarthy reported today in the Register. He writes: "ICANN has spent $2.5m in the past year lobbying the US government, putting the small non-profit on a par with multi-national corporations. The figure is five times larger than the organization has previously admitted to. It emerged after ICANN was repeatedly asked to reveal the true amount it was spending on professional lobbyists in its bid to take over the internet's critical IANA functions — that's the heart of the global DNS, worldwide IP address allocation, and management of communication protocol details."
Beijing and leading Chinese tech firms are collaborating to build a secure smartphone for government officials that relies on a domestically built operating system and processor chip, according to reports. The move is part of China's effort to construct its own uncrackable smartphones in an attempt to evade U.S. surveillance programs. While it currently lags in microchip development, China has been making aggressive moves in recent months to catch up. Earlier this year, the state-owned Chinese chipmaker Tsinghua Unigroup reportedly made a $23 billion bid to acquire U.S. chipmaker Micron Technology.