Paul Kedrosky (Kauffman Foundation) noted the similarity of the new power grid to the public Internet in a talk at the ARPA-E meeting this week. There are now many low-wattage power sources (low compared to the output of the major power stations) which could be controlled as a multicast mesh. Smart meters measure power factor - with more granular control over capacitor banks and local disconnects, the quality and efficiency of power distribution between the substation and consumers could be improved.
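Power factor is the ratio of real power to apparent power; capacitor banks supply reactive power locally to raise it. A minimal sketch of the standard correction arithmetic - the 800 kW feeder and the 0.80 to 0.95 target are made-up illustrative numbers, not utility data:

```python
import math

def power_factor(real_kw, reactive_kvar):
    """Power factor = real power / apparent power."""
    return real_kw / math.hypot(real_kw, reactive_kvar)

def correction_kvar(real_kw, pf_now, pf_target):
    """Capacitor bank size (kVAR) to raise power factor from pf_now to pf_target.

    Standard identity: kVAR = kW * (tan(acos(pf_now)) - tan(acos(pf_target)))
    """
    return real_kw * (math.tan(math.acos(pf_now)) - math.tan(math.acos(pf_target)))

# A feeder drawing 800 kW at 0.80 power factor needs roughly 337 kVAR
# of capacitance to reach 0.95:
print(round(correction_kvar(800, 0.80, 0.95), 1))
```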
There are some pilot implementations and trials in progress - SmartGridNews has a good list. The usual model is to use 900 MHz wireless for control-system signaling and data collection, which is adequate for the volume of data generated. Other systems use cellular wireless service. If fiber were to be pulled or blown alongside the existing power lines, it would be possible to create an additional local loop, competing with the existing telecom, cable, and wireless providers.
As previously noted, this market is at about the same stage of development as the telecommunications market was 20 years ago. Systems architecture and standards work could create at least the same scale of opportunities for innovation, while learning from the experience of developing that market.
Last week's North American Network Operators' Group (NANOG) meeting was in Atlanta - without a sponsor, I don't attend the meetings in person. However, the subject matter is important, so I listen to the talks. This event had better quality audio and video than on previous occasions - the feeds available were the same, but the consistency with which they were available was much better. I used the Live HD VLC direct link at http://hidef.nanog.org:8080
Richard Steenbergen's talk on 4 October reviewed changes in the size of the Internet global routing table from February to September 2010, possible reasons for the growth, and ways to reduce the size of the table (so as to be able to carry traffic from more places while reducing the processing and memory requirements on the routers).
This is one of those topics where there are only a few hundred people in the world, if that many, who understand the dynamics of the global routing table, and they are retiring faster than new people are coming into the business. Of those few hundred, only a handful are capable of developing new protocols, or of improving on the existing, rather fragile, state of affairs.
Traffic engineering - controlling which packets take which paths into and out of ISPs - requires announcing more-specific prefixes, which makes more work for every router carrying the table. There are clear signs of incompetence, too - some small countries, with very few physical routes, are deaggregating much more than can possibly be useful. Geoff Huston pointed out that, contrary to intuition, his measurements indicate that more BGP routes do not lead to more BGP churn. Chris Morrow suggested that there are very few tools to help do this well - more education and more protocol work would both help.
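As a toy illustration of why deaggregation inflates the table - the prefix below is an RFC 1918 stand-in, not a real announcement:

```python
import ipaddress

def deaggregate(prefix, new_len):
    """Split one covering prefix into all of its more-specific subnets."""
    return list(ipaddress.ip_network(prefix).subnets(new_prefix=new_len))

# One /16 announced as /24 more-specifics: 256 global-table entries
# where a single aggregate would do.
print(len(deaggregate("10.0.0.0/16", 24)))  # 256
```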
Phil Smith's BGP tutorials have background and pointers for how to get started (see the Agenda)
On 5 October, Greg Hankins from Brocade presented a wide-ranging review of the current state of Ethernet. There's a good list of 40G and 100G physical layer specifications for different distances; an overview of IEEE 802.3az-2010 Energy Efficient Ethernet, just approved; useful remarks on the status of MPLS and OAM for Carrier Ethernet; some good diagrams describing what Cisco called Data Center Ethernet and is now called Data Center Bridging, for carrying SAN and LAN traffic on the same Ethernet; and a summary of the replacements for Spanning Tree, comparing TRILL and Shortest Path Bridging. All this in 59 slides and 30 minutes.
Brian Martin from CERN (Geneva) described how they monitor the network supporting one of the experiments at the Large Hadron Collider. Graphical representation at various levels of detail for 8000 ports is necessary, but difficult - they built a hierarchical model.
Don Lee described the experimental LISP (Locator/ID Separation Protocol - Dino Farinacci's work) implementation he's done for IPv4 and IPv6 at Facebook - despite Facebook's scale, the amount of configuration work was small, and the installation time short. Planning and design took longer.
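The core idea of LISP is to separate the endpoint identifier (EID) from the routing locator (RLOC), with a mapping system between them. A purely illustrative sketch of the lookup - the prefixes and addresses are made up, and a real ingress tunnel router would query the mapping system with a Map-Request rather than consult a local dict:

```python
import ipaddress

# Toy EID-to-RLOC map; this dict stands in for a real map cache.
EID_TO_RLOCS = {
    "10.1.0.0/16": [("203.0.113.1", 1), ("203.0.113.2", 2)],  # (RLOC, priority)
    "10.1.2.0/24": [("198.51.100.9", 1)],                     # a more-specific site
}

def lookup_rloc(eid):
    """Longest-match the EID, then return the best-priority (lowest) RLOC."""
    addr = ipaddress.ip_address(eid)
    matches = [(ipaddress.ip_network(net), rlocs)
               for net, rlocs in EID_TO_RLOCS.items()
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return None
    _, rlocs = max(matches, key=lambda m: m[0].prefixlen)
    return min(rlocs, key=lambda r: r[1])[0]

print(lookup_rloc("10.1.2.3"))   # hits the more-specific /24 site
print(lookup_rloc("10.1.9.9"))   # falls back to the covering /16
```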
The great majority of the presentation material is linked to the Agenda, and the video usually gets posted to the meeting archives - it's not there yet, as of 11 October.
The next meeting is in Miami, January 30 to February 2, 2011
The Entrepreneurial Thought Leader weekly speaker series has started at Stanford - last week's speaker was the CEO of Hara, Amit Chatterjee. Hara sells environmental and energy business process management. Only about 15% of their revenue comes from custom services; the rest is from sales of their EEM SaaS application. The company has taken $20m in funding, principally from Kleiner Perkins. Approximately two-thirds of their customers are commercial companies - the main motivation for them is to save money, by understanding their usage of power and water and minimizing it where possible. One-third of their customers are city or state government bodies - their primary motivation is to be able to demonstrate that they are doing something. The inputs to the process come from reviewing utility bills and comparing usage to what is usual for other businesses with similar requirements. Where it is possible to obtain dynamic measurements from smart meters (in this business, dynamic means a measurement every 15 minutes), they will incorporate those into the process. Siebel and SAP are competitors.
Steve Blank and Tom Byers spoke briefly.
The series is being held in an auditorium downstairs in the almost-complete new Engineering buildings - as usual at Stanford, parking is difficult or expensive or both, so come by bicycle if you can.
DEMO Fall happened last week - since it was at the Santa Clara Convention Center, 20 minutes away, I signed up. My impression from the pre-show material was that it was heavily oriented towards small software startups hoping to be picked up by investors, with the eventual target customer being big enterprises - and there was a good deal of that. There were some interesting infrastructure ideas. A noticeable fraction of the audience was international. According to Matt Marshall, the producer of the show, 800 people registered, 200 of them in the last few days - my rough headcount in the main hall on Wednesday came to about 500. The hall was well filled, with enough room to find a seat if you came in late.
The agenda has links to video of the live demos given by each of the attending companies. Of most interest to me were:
The people with the best pictures were Vizerra (company name 3DreamTeam, though their business cards say 'Vizerra'). It's a software platform for building 3D representations - they've started with World Heritage sites. Their demo also had clips from a simulation done for a helicopter vendor with a rendering of the interiors as well as the flying experience. The development team is located in Moscow.
These people brought a working cell phone base station to the show - in a 19" rack-mount unit about 4U high. They just provided service at Burning Man, using a 60W solar-powered, battery-backed unit. There's an embedded software-defined radio, adaptable to the frequency bands used in different countries. Target price is $10,000 per base station, supporting IP - enabling cellphone service for $2/month per user. There's open source software available for it, as well as proprietary supported software.
If you like the iPad format, but can't be constrained by Apple, look at the Touch Book - it's a tablet, with a robust detachable keyboard, with its own OS on an ARM processor and support for assorted Linux distributions.
Had two of its hand-built prototypes working at the show. The goal is to sell a device which will sit or hang in the kitchen or family room and provide an always-on window to somewhere else, requiring no expertise from the user. The initial target audience is people who want reassurance about remote family and friends, without having to press buttons or make a call.
This is a proposal for a synchronous layer 2 framing protocol, to replace Ethernet, and a Distributed Queue Switch Architecture to use the framing. The company has far too many ideas about how it might be used. They say they are going to open source an implementation.
There were several on-stage interviews. Jack Dorsey (Twitter founder) talked about what he'd learned there. Square, his most recent venture, is a payments service - so it requires a very different reliability model from Twitter's. It has been heavily instrumented for measurement analytics (unlike Twitter, which initially had no measurement or monitoring). Square is bringing on customers gradually as it gains understanding of its scaling issues.
Asked to say what he expected to be the next big thing, he suggested that in a year or two, preventative health care would be an interesting market.
20 years ago, large national telecommunications companies still dominated the voice and data market, despite deregulation during the previous decade. Cisco Systems was a small router company, competing with Wellfleet and Proteon.
The energy business now has distinct similarities to the telecommunications business as it was then - feeling some effect of deregulation, but still heavily controlled by assorted national and local government agencies. Emissions trading and carbon offsets, combined with subsidies for alternative fuel sources, have led to widespread installation of solar panels and wind farms - more so, relative to population, in Europe than in the US. At the consumer level it is possible to generate electricity, although not economically compared to on-grid prices.
The data communications protocols and standards environment was very different 20 years ago - neither TCP/IP nor Ethernet was clearly dominant (we had DECnet, IPX, AppleTalk, LAT, token ring, FDDI, with Frame Relay and ATM still to come). Protocols and standards for energy network control are in about the same state as the data communications protocols of 20 years ago: many proprietary systems, utilities expecting custom implementations to match their operating requirements, incomplete standards, patchy interoperability.
Last week, one year after the formation of its Smart Grid business unit, Cisco announced a partnership with Itron, which makes energy and water meters. The following day Cisco announced it intended to acquire Arch Rock, which makes wireless sensor technology, "focusing on energy and environmental monitoring and Smart Grid applications".
"Arch Rock will accelerate Cisco's ability to facilitate the utility industry's transition to an open and interoperable smart grid by enabling Cisco to offer a comprehensive and highly secure advanced metering infrastructure solution that is fully IP and open-standards based."
Speculation - will Cisco be able to sell huge volumes of equipment to new players in the energy markets, as it did to the new players in the telecommunications markets? Or will some other, smaller, more agile company emerge, analogous to the Cisco of 20 years ago?
Updated to add: Cisco sees a huge market, and promotes RPL ..
Cisco to acquire Arch Rock PR
Cisco partners with Itron PR
Global market - IC article
Cisco's Smart Grid plans GigaOm
Greentech Media notices AMI implications of acquisition
Cisco on creating the Internet of Things using RPL
Cunning Systems evaluates product and service ideas in computing and communications. If you would like to discuss an idea, contact us at firstname.lastname@example.org
Musemantik is a very early stage company located in Edinburgh, building middleware which infers emotion from games during design and at runtime. Fear, excitement, curiosity and other measured emotions are used to control music, lighting and camera viewpoint to increase emotional engagement and cohesion.
There will be a demonstration of the technology at the screening sessions on 25 August 2010 at the Edinburgh Interactive Festival (Filmhouse, Lothian Road, Edinburgh)
Maciej Zarawski and Diwakar Thakore are the co-founders - combining technology and business backgrounds.
My interest in this technology stems from studying Artificial Intelligence at what is now the Informatics department at Edinburgh, combined with a long term interest in music and the potential for improving immersive experiences. Advising a team located in Scotland from a Silicon Valley base creates certain challenges - but it's no more difficult than the remote acquisition work I did at Cisco Systems.
Follow-the-sun used to be about who was working when - most commonly for customer support. Beyond a certain number of calls and customers, particularly if the topic is not complex, it is cheaper to have people working in prime time in their own time zones than to run two or three shifts in one time zone. So Cisco Systems' first call centers emerged in Sydney and Amsterdam, so that technical support would be at UTC-8 (Menlo Park, CA), UTC+10, and UTC+1 (modulo daylight savings).
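With centers at those three offsets, every hour of the day falls in someone's working day. A small sketch of the coverage arithmetic - the offsets come from the post, but the 08:00-17:59 working window is my assumption:

```python
CENTERS = {"Menlo Park": -8, "Amsterdam": 1, "Sydney": 10}  # UTC offsets

def on_shift(utc_hour):
    """Centers whose local time falls in an assumed 08:00-17:59 working day."""
    return sorted(name for name, offset in CENTERS.items()
                  if 8 <= (utc_hour + offset) % 24 < 18)

print(on_shift(3))                          # mid-morning in Sydney only
print(all(on_shift(h) for h in range(24)))  # every UTC hour is covered
```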
There's a new use for the same phrase - computing tasks are moved to a different geographical location to take advantage of differential electrical power costs. Solar power and wind power are variable by nature. Being able to use them close to the generating source reduces the transmission losses from moving the power to where it is being used.
In iSGTW this week "Canada’s Green Star Network aims to demonstrate that by allowing the computations to follow the renewable energy across a large, fast network, the footprint of high-throughput computing can be drastically reduced."
Google already has the capability to do this, as a side effect of the overall engineering design of its infrastructure (characterized as "design for failures"). Newer data centers in some parts of the world can be built without chillers, so that cooling uses only the outside air. On the occasional hot day, the effective compute load for that location goes down, and bulk machine-to-machine traffic can be moved. This could be described as follow-the-cloud, to find cooling capability, rather than follow-the-sun to find power.
Moving computation to where there is cheap sun or wind power sounds like a 'why didn't I think of that' idea - until some of the necessary details are considered.
What's the actual thing being moved? An application and all its associated data? A virtual machine? Using which operating system and hypervisor? Greg Pfister has 4 posts describing hardware virtualization in detail - the memory management issues when moving a VM over a significant distance (and therefore over many milliseconds of time) bear consideration.
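To see why memory management dominates long-distance VM moves, consider pre-copy live migration: pages are copied while the VM keeps running and dirtying memory, so each round re-sends what changed, and a high dirty rate can outrun the link entirely. A back-of-envelope simulation - all the numbers are hypothetical:

```python
def precopy_rounds(mem_gb, dirty_rate_gbps, link_gbps,
                   stop_copy_gb=0.1, max_rounds=30):
    """Estimate pre-copy rounds and total data sent before the final
    stop-and-copy phase. Returns (rounds, total_gb_sent), or None if the
    dirty rate outruns the link and pre-copy never converges."""
    to_send = mem_gb
    total = 0.0
    for rnd in range(1, max_rounds + 1):
        total += to_send
        seconds = to_send * 8 / link_gbps             # GB -> Gb, divide by Gbps
        to_send = min(mem_gb, dirty_rate_gbps / 8 * seconds)  # dirtied meanwhile
        if to_send <= stop_copy_gb:
            return rnd, total + to_send
    return None  # did not converge

# 16 GB VM, pages dirtied at 1 Gbps, over a 10 Gbps long-haul link:
print(precopy_rounds(16, 1, 10))
```

If the dirty rate equals the link rate, each round re-sends as much as it just copied and the loop never converges - which is exactly the case long, high-latency paths make more likely.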
Bandwidth - Google owns a lot of its own fiber, so can run many 10 Gbps streams (40 Gbps and 100 Gbps soon) on its WDM systems. GreenStar will have access to the CANARIE research network, which has 10 Gbps on each wavelength.
Availability - designing for high availability system-wide requires keeping spare capacity - for computing, for power, for connectivity. Designing a central control point rather than a distributed system seems easier, but doesn't scale well.
Measurement - technologies to measure where power is actually available, and models to price-optimize its use, are in their infancy. Certain consumer and industrial use patterns vary consistently by time of day and time of year, but reacting on, say, 15-minute time scales at anything other than very coarse granularity (like turning a gas-fired power station up and down) isn't something regulated utilities find easy. For computing load, Amazon AWS publishes 'service health', but not capacity.
These are just a few of the issues which will need to be addressed. Standards for moving data between 'clouds', whether private or public, are in progress. It is possible to turn up computing capability quickly at different data centers; but since data centers typically buy power on long-term contracts, there is no variable price signal to inform a location decision.
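If sites did expose a variable power price - say, a 15-minute spot figure, which they mostly don't today - relocating a deferrable batch job would reduce to picking the cheapest site with enough headroom. A sketch of that decision; every site name, price, and the data-movement surcharge below is hypothetical:

```python
def place_job(job_kw, sites, move_cost_per_kwh=0.0, current=None):
    """Pick the site with the lowest effective $/kWh that has headroom.

    sites: dict name -> (price_per_kwh, free_kw). Moving away from the
    current site carries a (hypothetical) data-movement surcharge.
    """
    candidates = []
    for name, (price, free_kw) in sites.items():
        if free_kw < job_kw:
            continue  # not enough spare capacity at this site
        effective = price + (move_cost_per_kwh if name != current else 0.0)
        candidates.append((effective, name))
    return min(candidates)[1] if candidates else None

sites = {
    "quebec_hydro": (0.04, 500),   # cheap, plenty of headroom
    "alberta_wind": (0.03, 50),    # cheapest, but little free capacity
    "california":   (0.12, 800),
}
print(place_job(job_kw=100, sites=sites, move_cost_per_kwh=0.02,
                current="california"))
```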
Summary: relocatable computing to follow the sun or follow the wind could happen, particularly if the price of power stays high enough to encourage the necessary development, but there are many interacting details, in systems which are currently opaque to their purchasers. Developing Platforms as a Service (PaaS) to support relocation of general-purpose computing is a long-term project.
iSGTW article on the GreenStar network
Cybera (Alberta infrastructure research institution) description of its participation in and use of the GreenStar networks
Google infrastructure talk at Nanog 49
Hardware virtualization, Greg Pfister
AWS status reporting
Even Greylock has Valley envy - they are a very long-established VC firm which used to be firmly based in Boston. Next month they are moving their headquarters to Palo Alto.
Reason - lack of critical mass (presumably, of companies in whom they already invest and of new targets). Henry McCance, Greylock's chairman emeritus, interviewed in Xconomy, said "Boston has virtually no important entrants in arguably the most important information technology segment of the last 20 years, in spite of the preeminent universities of Harvard and M.I.T."
For companies designing new hardware, the Valley has a huge depth and breadth of people building ASICs, companies who can translate a functional specification into a PCB, companies who can source small volumes of components, companies who will do preliminary and/or final assembly and test, and excellent air freight service out of San Jose or Oakland. There's a well-established path between here and China for scaling up operations; good operations people here have many years of experience in setting up supply chains and managing remote assembly lines.
It isn't so obvious why the big software companies are here: Facebook could have stayed at Harvard, Google could have stayed in Michigan or Maryland - except that the Valley also has great breadth and depth in software people, as well as all the business support structure - short leases for office space are normal, and law firms are used to helping startups for very little now, in exchange for the opportunity of more lucrative business later.
Right now, the weather is great too - it makes many things more predictable if there is no rain between May and September. It's been cool enough not to need air conditioning, if the building can be ventilated with outside air.
Like many other parts of the world, Scotland would like to do a better job of creating companies which grow to be as successful as the big companies in the Valley. I created a LinkedIn group to discuss what might improve the business environment there - contact me at firstname.lastname@example.org if you want to participate. Realistically, though, the Valley has, say, a 50-year head start (Hewlett-Packard founded in 1939, Fairchild Semiconductor in 1957). Now that the Internet has made communication so much easier (Cisco and Juniper, both headquartered in the Valley, have a combined market share of more than 80% of core routers sold), the way forward needs to be to understand where Scots have a comparative advantage: take lessons from the Valley about the business environment, and be prepared to compete to keep the potential company builders based locally, or to get them to return and apply experience gained elsewhere.
Updated to add : Bradford Cross has a good historical perspective, looking back to the California Gold Rush and developments in ham radio, to explain why Silicon Valley is what it is. "Dramatic cultural change without an overwhelming catalyst or crisis is not something that happens in the span of a business cycle, it is something that happens in the course of one or more human lifetimes."
7 September - adding reference to Margaret O'Mara's article in Foreign Policy. Words of advice for foreigners looking to emulate the Valley : you need to attract the world's most talented people, and it's a global competition. Infrastructure, transport, clean air all matter. Tax breaks, low barriers to foreign investment, and encouraging immigration have been demonstrated to help.
Compared to many places, Scotland's weather is benign - although it is wetter and colder than in the Valley.
6 November - updated to add reference to the history of the Valley from the perspective of the lawyers involved. Links to an interview with Larry Sonsini. I've been a customer of both Wilson Sonsini, and Cooley. Long ago I installed routers for the London office of Brobeck, when they were Cisco's lawyers.
2 March 2012 - adding reference to Jenny Jung, @jung , on being a German visitor to Silicon Valley.
Cunning Systems is located in Los Altos, near Stanford University, HP, Google, and Cisco Systems.
The trip starts by getting on to Moffett Field - which has security at the gate, so have photo ID for each person to hand. Signposting is good - check-in is at building 20. Coffee, juice and scones are available before departing - they don't understand tea. Here's the door to the building, which Airship Ventures shares with the Singularity University.
Airship Ventures have their base at NASA Ames, which is about 5 miles from where we live. Given the offer of a balloon trip as a birthday present, I substituted a Zeppelin trip. The current plan is to go on 11 July, a week from today.
Photo Janet Beegle
Background information :
Manufactured in Friedrichshafen, Germany by ZLT Zeppelin Luftschifftechnik GmbH & Co KG
"Founded in 2007 in California, Airship Ventures, Inc., operates the only passenger airship operation in the United States, featuring the Zeppelin Eureka, the world’s largest airship. The Zeppelin’s spacious cabin comfortably accommodates one pilot, one flight attendant, and 12 passengers, with panoramic windows, an onboard restroom with window, a 180-degree rear observation window and “love seat” that wraps the entire aft of the cabin. Using helium for lift, and vectored thrust engines for flight, Zeppelin NTs have been flying in Germany and Japan since 1997.
Two lateral and one rear engine provide the flight control and propulsion for the airship. The three engines combine to produce a maximum speed of 78 miles per hour, with typical cruising speed of 35 to 40 mph. Because the engines are mounted far above the cabin the passengers experience low noise level. This position is also responsible for the high performance maneuvering capabilities."
In addition to the Moffett Field base, there are flights from Monterey, Los Angeles and San Diego. European flights start from Friedrichshafen, which is near the southern border of Germany, on Lake Constance (Bodensee).