Content Networking, or Content-centric Networking as PARC is now calling it, is an architecture which Van Jacobson has been promoting for some years now. It's worth attention, both because of Van's reputation and because it's a very interesting answer to the question "knowing what we know now about what people want and what capabilities can be provided, how should networking and computing be structured?"
"Content-centric networking is PARC's vision for taking the next step in data communication — a change in network architecture to make content retrieval by name, not location, the fundamental operation of the network. Our approach is to reuse and build upon successful features of TCP/IP, with the key change of replacing the machine-oriented IP model with a named content model as the basis for the central protocol that connects networks."
If networks work natively in terms of content - in other words, if the name a user expresses to the network to retrieve a piece of content refers in some way to the content item itself, rather than to some specific host on which the content can be found - then users can securely retrieve desired content by name, and authenticate the result regardless of where it comes from. (Refer to the paper on Securing Network Content)
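As an illustration of the idea - my own sketch, not PARC's actual protocol - a self-certifying name can carry a digest of the content it refers to, so whichever host returns the bytes, the receiver can check them against the name alone:

```python
# Hypothetical self-certifying naming sketch: the name embeds a digest of
# the content, so verification needs no trust in the supplying host.
import hashlib

def make_name(prefix: str, content: bytes) -> str:
    """Build a self-certifying name: a prefix plus the content's digest."""
    return f"{prefix}/{hashlib.sha256(content).hexdigest()}"

def verify(name: str, content: bytes) -> bool:
    """Check retrieved content against the digest embedded in its name,
    regardless of which host supplied it."""
    claimed = name.rsplit("/", 1)[-1]
    return hashlib.sha256(content).hexdigest() == claimed

data = b"a named content object"
name = make_name("/parc/reports", data)
ok = verify(name, data)           # content authenticates itself
bad = verify(name, b"tampered")   # any modification is detected
```

Real CCN names and signatures are richer than this (see the Securing Network Content paper), but the principle - the name, not the location, is what you trust - is the same.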
Pere Monclus, with whom I used to work when at Cisco, talked at PARC a couple of weeks ago about how difficult Cisco finds it to implement support for new stuff like this.
23 Nov 2010 - Updated
Under the name Named Data Networking, there is now NSF funding for further research into CCN. The PARC technical report NDN-001 (Oct 2010) describes the project in more detail.
PARC's content-networking page
Discussion between Van Jacobson and Craig Partridge (free, despite being at acm.org)
Securing Network Content pdf
Other NDN resources
Ray Ozzie's 'dawn of a new day' post, on leaving Microsoft, is worth reading if you haven't already seen it.
It's a good snapshot of a broad set of things going on in the computing and networking space. There's a useful model for understanding how many of the important components being built will fit together:
" a world of
cloud-based continuous services that connect us all and do our bidding, and
appliance-like connected devices enabling us to interact with those cloud-based services."
He predicts that this is the correct model for the next 5 years: particularly, that apps and browsers will enable simplicity for the end user, so that they really don't have to understand how the car works in order to drive it.
Most of what he's saying could be regarded as obvious, though it is worth noticing that he thought to say what he has to the executive staff at Microsoft ...
He explicitly calls out embedded devices as a subset of connected devices.
My thought is that there is an opportunity here, to create "appliance-like, easily configured, interchangeable and replaceable without loss" devices. The design and software requirements for such things, though more relaxed than they were in the first era of microprocessor process control, demand a more precise discipline than application software used directly by humans requires. There will be constraints on available power, space, operating temperature, network bandwidth, and connection capability. Toolsets to support this kind of development are still expensive and complex compared to normal software development environments.
Hardware platforms exist, and prices are dropping rapidly, driven by the huge volume requirements of the low end of the mobile phone platforms.
For an example confirming the other part of the model, look at the growth of Google's traffic as seen from the public Internet - the majority of this is video, with YouTube as the service. Craig Labovitz (Arbor Networks) estimates Google's traffic volume at between 6% and 12% of all the Internet traffic in the world, and it's still growing.
Ray Ozzie's letter, on leaving Microsoft, 28 October 2010
Google's traffic as a percentage of all Internet traffic
Paul Kedrosky (Kauffman Foundation) noted the similarity of the new power grid to the public Internet in a talk at the ARPA-E meeting this week. There are now many low-wattage power sources (low compared to the total output of the major power stations) which could be controlled as a multicast mesh. Smart meters measure power factor - the quality and efficiency of power distribution between the substation and consumers could be improved with more granular control over capacitor banks and local disconnects.
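For readers unfamiliar with the term, a quick worked example of the power factor arithmetic - the numbers below are purely illustrative:

```python
# Power factor sketch with made-up feeder numbers: real power delivered
# versus apparent power drawn, and the reactive component a capacitor
# bank could compensate.
import math

def power_factor(real_kw: float, apparent_kva: float) -> float:
    """Power factor = real power / apparent power (1.0 is ideal)."""
    return real_kw / apparent_kva

# Illustrative: a feeder delivering 80 kW of real power on 100 kVA apparent.
pf = power_factor(80.0, 100.0)                 # 0.8
reactive_kvar = math.sqrt(100.0**2 - 80.0**2)  # 60 kVAR of reactive load
```

The closer the capacitor banks can be switched to follow that reactive component, the less capacity is wasted between substation and consumer - which is the granular-control argument above.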
There are some pilot implementations and trials in progress - SmartGridNews has a good list. The usual model is to use 900 MHz wireless for control system signaling and data collection - this is adequate for the volume of data generated. Other systems use the cellular wireless service. If fiber were pulled or blown alongside the existing power lines, it would be possible to create an additional local loop, competing with the existing telecom, cable, and wireless providers.
As previously noted, this market is at about the same stage of development as the telecommunications market was 20 years ago. Systems architecture and standards work could create at least the same scale of opportunities for innovation, while learning from the experience of developing that market.
Last week's North American Network Operators' Group (NANOG) meeting was in Atlanta - without a sponsor, I don't attend the meetings in person. However, the subject matter is important, so I listen to the talks. This event had better quality audio and video than on previous occasions - the feeds available were the same, but the consistency with which they were available was much better. I used the Live HD VLC direct link at http://hidef.nanog.org:8080
Richard Steenbergen's talk on 4 October reviewed changes in the size of the Internet global routing table, February - September 2010, possible reasons for the growth, and ways to reduce the size of the table (so as to be able to carry traffic from more places while reducing the processing and memory requirements on the routers).
This is one of those topics where there are only a few hundred people in the world, if that many, who understand the dynamics of the global routing table, and they are retiring faster than new people are coming in to the business. Of those few hundred, only a handful are capable of developing new protocols, or improving on the existing, rather fragile, state of affairs.
Traffic engineering - controlling which packets take what paths into and out of ISPs - requires more specific addresses, which mean more table entries and more work. There are clear signs of incompetence, too - some small countries, with very few physical routes, are deaggregating much more than can possibly be useful. Geoff Huston pointed out that, contrary to intuition, his measurements indicate that more BGP routes do not lead to more BGP churn. Chris Morrow suggested that there are very few tools to help do this well - more education and more protocol work would both help.
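To see why deaggregation inflates the table, here's a small sketch using nothing but Python's standard library - four adjacent /24 announcements occupy four table slots where one covering /22 would do:

```python
# Deaggregation sketch: adjacent more-specific prefixes versus their
# covering aggregate. Example prefixes are from private space, purely
# for illustration.
import ipaddress

# Four adjacent /24s announced separately: four routing table entries.
deaggregated = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(4)]

# Collapsed back into the single covering aggregate: one entry.
aggregated = list(ipaddress.collapse_addresses(deaggregated))
```

Each more-specific gives its originator finer control over inbound traffic, which is the traffic-engineering temptation - but every router in the default-free zone pays for the extra entries.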
Phil Smith's BGP tutorials have background and pointers for how to get started (see the Agenda)
On 5 October, Greg Hankins from Brocade presented a wide-ranging review of the current state of Ethernet: there's a good list of 40G and 100G physical layer specifications for different distances; an overview of IEEE 802.3az-2010 Energy Efficient Ethernet, just approved; useful remarks on the status of MPLS and OAM for Carrier Ethernet; some good diagrams describing what Cisco called Data Center Ethernet and is now called Data Center Bridging, for carrying SAN and LAN traffic on the same Ethernet; and a summary of the replacements for Spanning Tree, comparing TRILL and Shortest Path Bridging. All this in 59 slides and 30 minutes.
Brian Martin from CERN (Geneva) described how they monitor the network supporting one of the experiments at the Large Hadron Collider. Graphical representation for various levels of detail for 8000 ports is necessary, but difficult - they built a hierarchical model.
Don Lee described the experimental LISP (Locator/ID Separation Protocol - Dino Farinacci's work) implementation he's done for IPv4 and IPv6 at Facebook - despite Facebook's scale, the amount of configuration work was small, and the installation time short. Planning and design took longer.
The great majority of the presentation material is linked to the Agenda, and the video usually gets posted to the meeting archives - it's not there yet, as of 11 October.
The next meeting is in Miami, January 30 to February 2, 2011
The Entrepreneurial Thought Leader weekly speaker series has started at Stanford - last week's speaker was the CEO of Hara, Amit Chatterjee. Hara sells environmental and energy business process management. Only about 15% of their revenue comes from custom services; the rest is from sales of their EEM SaaS application. The company has taken $20m in funding, principally from Kleiner Perkins. Approximately two-thirds of their customers are commercial companies - the main motivation for them is to save money, by understanding their usage of power and water and minimizing it where possible. One-third of their customers are city or state government bodies - their primary motivation is to be able to demonstrate that they are doing something. The inputs to the process come from reviewing utility bills and comparing usage to what is usual for other businesses with similar requirements. Where it is possible to obtain dynamic measurements from smart meters (in this business, dynamic means a measurement every 15 minutes), they will incorporate that into the process. Siebel and SAP are competitors.
Steve Blank and Tom Byers spoke briefly.
The series is being held in an auditorium downstairs in the almost-complete new Engineering buildings - as usual at Stanford, parking is difficult or expensive or both, so come by bicycle if you can.
DEMO Fall happened last week - since it was at the Santa Clara Convention center, 20 minutes away, I signed up. My impression from the pre-show material was that it was very oriented towards small software startups hoping to be picked up by investors with the eventual target customer being big enterprises - and there was a good deal of that. There were some interesting infrastructure ideas. A noticeable fraction of the audience was international. According to Matt Marshall, the producer of the show, 800 people registered, 200 of them in the last few days - my rough headcount in the main hall on Wednesday came to about 500. The hall was well filled, with enough room to find a seat if you came in late.
The agenda has links to video of the live demos given by each of the attending companies. Of most interest to me were :
The people with the best pictures were Vizerra (company name 3DreamTeam, though their business cards say 'Vizerra'). It's a software platform for building 3D representations - they've started with World Heritage sites. Their demo also had clips from a simulation done for a helicopter vendor with a rendering of the interiors as well as the flying experience. The development team is located in Moscow.
These people brought a working cell phone base station to the show - in a 19" rack mount unit about 4U high. They just provided service at Burning Man, using a 60W solar-powered, battery-backed unit. There's an embedded software defined radio, adaptable to assorted multinational frequencies. Target price is $10,000 per base station, supporting IP - enabling cellphone service for $2/month per user. There's open source software available for it, as well as proprietary supported software.
If you like the iPad format, but can't be constrained by Apple, look at the Touch Book - it's a tablet, with a robust detachable keyboard, with its own OS on an ARM processor and support for assorted Linux distributions.
Had two of its hand-built prototypes working at the show. The goal is to sell a device which will sit or hang in the kitchen or family room, and provide an always-on window to somewhere else, requiring no expertise from the user. The initial target audience is people who want reassurance about remote family and friends, without having to press buttons or make a call.
This is a proposal for a synchronous layer 2 framing protocol, to replace Ethernet, and a Distributed Queue Switch Architecture to use the framing. The company has far too many ideas about how it might be used. They say they are going to open source an implementation.
There were several on-stage interviews. Jack Dorsey (Twitter founder) talked about what he'd learned there. Square, his most recent venture, is a payments service - so it requires a very different reliability model from Twitter's. It has been heavily instrumented for measurement analytics (unlike Twitter, which initially had no measurement or monitoring). Square is bringing on customers gradually as it gains understanding of its scaling issues.
Asked to say what he expected to be the next big thing, he suggested that in a year or two, preventative health care would be an interesting market.
20 years ago, large national telecommunications companies still dominated the voice and data market, despite deregulation during the previous decade. Cisco Systems was a small router company, competing with Wellfleet and Proteon.
The energy business now has distinct similarities to the telecommunications business as it was then - feeling some effect of deregulation, but still heavily controlled by assorted country and local government agencies. Emissions trading and carbon offsets, combined with subsidies for alternative fuel sources, have led to widespread installation of solar panels and wind farms - more so, relative to population, in Europe than in the US. At the consumer level it is possible, although not economic compared to on-grid prices, to generate electricity.
The data communications protocols and standards environment was very different 20 years ago - neither TCP/IP nor Ethernet were clearly dominant (we had DECnet, IPX, Appletalk, LAT, token ring, FDDI, with Frame Relay and ATM still to come). Protocols and standards for energy network control are also in about the same state as the data communications protocols of 20 years ago - many proprietary systems, the utilities expect custom implementations to match their operating requirements, standards are incomplete, interoperability is patchy.
Last week, one year after the formation of the Smart Grid business unit, Cisco announced a partnership with Itron, which makes energy and water meters. The following day Cisco announced it intended to acquire Arch Rock, which makes wireless sensor technology, "focusing on energy and environmental monitoring and Smart Grid applications".
"Arch Rock will accelerate Cisco's ability to facilitate the utility industry's transition to an open and interoperable smart grid by enabling Cisco to offer a comprehensive and highly secure advanced metering infrastructure solution that is fully IP and open-standards based."
Speculation - will Cisco be able to sell huge volumes of equipment to new players in the energy markets, as it did to the new players in the telecommunications markets? Or will some other, smaller, more agile company emerge, analogous to the Cisco of 20 years ago?
Updated to add: Cisco sees a huge market, and promotes RPL ..
Cisco to acquire Arch Rock PR
Cisco partners with Itron PR
Global market - IC article
Cisco's Smart Grid plans GigaOm
Greentech Media notices AMI implications of acquisition
Cisco on creating the Internet of Things using RPL
Cunning Systems evaluates product and service ideas in computing and communications. If you would like to discuss an idea, contact us at firstname.lastname@example.org
Musemantik is a very early stage company located in Edinburgh, building middleware which infers emotion from games during design and at runtime. Fear, excitement, curiosity and other measured emotions are used to control music, lighting and camera viewpoint to increase emotional engagement and cohesion.
There will be a demonstration of the technology at the screening sessions on 25 August 2010 at the Edinburgh Interactive Festival (Filmhouse, Lothian Road, Edinburgh)
Maciej Zarawski and Diwakar Thakore are the co-founders - combining technology and business backgrounds.
My interest in this technology stems from studying Artificial Intelligence at what is now the Informatics department at Edinburgh, combined with a long term interest in music and the potential for improving immersive experiences. Advising a team located in Scotland from a Silicon Valley base creates certain challenges - but it's no more difficult than the remote acquisition work I did at Cisco Systems.
Follow-the-sun used to be about who was working when - most commonly for customer support. Beyond a certain number of calls and customers, particularly if the topic is not complex, it is cheaper to have people working in prime time in their own time zones than to run two or three shifts in one time zone. So Cisco Systems' first call centers emerged in Sydney and Amsterdam, so that technical support would be at UTC-8 (Menlo Park, CA), UTC+10, and UTC+1 (modulo daylight savings).
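The scheduling arithmetic behind those three sites is simple enough to sketch - a toy coverage check, assuming 09:00-17:00 local business hours and ignoring daylight savings:

```python
# Toy follow-the-sun coverage check. Offsets match the three call center
# locations mentioned above; the 09:00-17:00 working day is an assumption.
SITES = {"Menlo Park": -8, "Amsterdam": +1, "Sydney": +10}

def on_duty(utc_hour: int) -> list[str]:
    """Return the sites whose local time falls within business hours."""
    return [name for name, offset in SITES.items()
            if 9 <= (utc_hour + offset) % 24 < 17]

# 02:00 UTC is midday in Sydney; 18:00 UTC is mid-morning in Menlo Park.
early = on_duty(2)
late = on_duty(18)
```

Three sites roughly eight hours apart is the minimum that keeps at least one center in prime time around the clock - which is exactly the Sydney/Amsterdam/Menlo Park arrangement.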
There's a new use for the same phrase - computing tasks are moved to a different geographical location to take advantage of differential electrical power costs. Solar power and wind power are variable by nature. Being able to use them close to the generating source reduces the transmission losses from moving the power to where it is being used.
In iSGTW this week "Canada’s Green Star Network aims to demonstrate that by allowing the computations to follow the renewable energy across a large, fast network, the footprint of high-throughput computing can be drastically reduced."
Google already has the capability to do this, as a side effect of the overall engineering design for their infrastructure (characterized as "design for failures"). Newer data centers in some parts of the world can be built without chillers, so that cooling uses only the outside air. On the occasional hot day, the effective compute load for that location goes down, and bulk machine-to-machine traffic can be moved. This could be described as follow-the-cloud, to find cooling capability, rather than follow-the-sun to find power.
Moving computation to where there is cheap sun or wind power sounds like a 'why didn't I think of that' idea - until some of the necessary details are considered.
What's the actual thing being moved? An application and all its associated data? A virtual machine? Using which operating system and hypervisor? Greg Pfister has four posts describing hardware virtualization in detail - the memory management issues when moving a VM over a significant distance (and therefore over many milliseconds of time) bear consideration.
Bandwidth - Google owns a lot of its own fiber, so can run many 10Gbps (now, 40Gbps and 100 Gbps soon) streams on its WDMs. Greenstar will have access to the Canarie research network, which has 10Gbps on each wavelength.
Availability - Design for high availability system wide requires keeping spare capacity - for computing, for power, for connectivity. Designing a central control point rather than a distributed system seems easier, but doesn't scale well.
Measurement - technologies to measure where power is actually available, and models to price-optimize its use, are in their infancy. Certain consumer and industrial use patterns vary consistently by time of day and time of year, but reacting on, say, 15-minute time scales at anything other than very coarse granularity (like turning a gas-fired power station up and down) isn't something regulated utilities find easy. For computing load, Amazon AWS publishes 'service health', but not capacity.
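A back-of-envelope sketch of the bandwidth and distance questions above - the VM size, link speed, and round-trip time below are illustrative assumptions, not measurements:

```python
# Rough time to move a VM's memory between distant sites. Pre-copy live
# migration typically needs several rounds to re-send dirtied pages; here
# we count one bulk copy plus per-round propagation delay.
def transfer_seconds(nbytes: int, gbps: float, rtt_ms: float,
                     rounds: int = 1) -> float:
    """Serialization time at the given line rate, plus round-trip delays."""
    return nbytes * 8 / (gbps * 1e9) + rounds * rtt_ms / 1000.0

# Assumed: an 8 GiB VM over a single 10 Gbps wavelength, 40 ms RTT
# (roughly coast-to-coast).
t = transfer_seconds(8 * 2**30, gbps=10, rtt_ms=40)
```

Call it around seven seconds for the bulk copy alone - tolerable for batch work, but the milliseconds of latency per page fault during the cut-over window are what make long-distance live migration delicate.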
These are just a few of the issues which will need to be addressed. Standards are in progress for moving data between 'clouds', whether private or public. It is possible to quickly turn up computing capability at different data centers, but since data centers typically buy power on long-term contracts, there is no variable price signal to affect a location decision.
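If such a variable price signal did exist, the placement decision might look something like this - the sites, prices, and migration cost below are entirely made up for illustration:

```python
# Toy placement policy: move a workload to the cheapest site only when
# the saving beats the (amortized) cost of moving it. All numbers are
# hypothetical.
def best_site(prices: dict[str, float], current: str,
              migration_cost: float) -> str:
    """Pick the site to run at, given $/kWh prices and a move threshold."""
    cheapest = min(prices, key=prices.get)
    if prices[current] - prices[cheapest] > migration_cost:
        return cheapest
    return current

prices = {"quebec": 0.03, "alberta": 0.05, "california": 0.12}  # $/kWh, invented
move = best_site(prices, current="california", migration_cost=0.02)
stay = best_site(prices, current="alberta", migration_cost=0.05)
```

The hard part isn't the policy - it's that, as noted above, neither the prices nor the capacity figures are published at useful granularity today.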
Summary: relocatable computing to follow the sun or follow the wind could happen, particularly if the price of power stays high enough to encourage the necessary development, but there are many interacting details, in systems which are currently opaque to their purchasers. Developing Platforms as a Service (PaaS) to support relocation for general purpose computing is a long-term project.
iSGTW article on the GreenStar network
Cybera (Alberta infrastructure research institution) description of its participation in and use of the GreenStar networks
Google infrastructure talk at Nanog 49
Hardware virtualization, Greg Pfister
AWS status reporting