The Entrepreneurial Thought Leader weekly speaker series has started at Stanford - last week's speaker was Amit Chatterjee, CEO of Hara. Hara sells environmental and energy business process management software. Only about 15% of their revenue comes from custom services; the rest comes from sales of their EEM SaaS application. The company has taken $20m in funding, principally from Kleiner Perkins. Approximately two-thirds of their customers are commercial companies, whose main motivation is to save money by understanding their usage of power and water and minimizing it where possible. The remaining third are city or state government bodies, whose primary motivation is to be able to demonstrate that they are doing something. The inputs to the process come from reviewing utility bills and comparing usage to what is usual for other businesses with similar requirements. Where it is possible to obtain dynamic measurements from smart meters (in this business, dynamic means making a measurement every 15 minutes), they incorporate those into the process. Siebel and SAP are competitors.
Steve Blank and Tom Byers spoke briefly.
The series is being held in an auditorium downstairs in the almost-complete new Engineering buildings - as usual at Stanford, parking is difficult or expensive or both, so come by bicycle if you can.
DEMO Fall happened last week - since it was at the Santa Clara Convention center, 20 minutes away, I signed up. My impression from the pre-show material was that it was very oriented towards small software startups hoping to be picked up by investors with the eventual target customer being big enterprises - and there was a good deal of that. There were some interesting infrastructure ideas. A noticeable fraction of the audience was international. According to Matt Marshall, the producer of the show, 800 people registered, 200 of them in the last few days - my rough headcount in the main hall on Wednesday came to about 500. The hall was well filled, with enough room to find a seat if you came in late.
The agenda has links to video of the live demos given by each of the attending companies. Of most interest to me were:
The people with the best pictures were Vizerra (company name 3DreamTeam, though their business cards say 'Vizerra'). It's a software platform for building 3D representations - they've started with World Heritage sites. Their demo also had clips from a simulation done for a helicopter vendor with a rendering of the interiors as well as the flying experience. The development team is located in Moscow.
These people brought a working cell phone base station to the show - in a 19" rack mount unit about 4U high. They had just provided service at Burning Man, using a 60W solar-powered, battery-backed unit. There's an embedded software defined radio, adaptable to assorted multinational frequencies. Target price is $10000 per base station, supporting IP - enabling cellphone service for $2/month per user. There's open source software available for it, as well as proprietary supported software.
If you like the iPad format, but can't be constrained by Apple, look at the Touch Book - it's a tablet, with a robust detachable keyboard, with its own OS on an ARM processor and support for assorted Linux distributions.
Had two of its hand built prototypes working at the show. The goal is to sell a device which will sit or hang in the kitchen or family room, and provide an always on window to somewhere else, requiring no expertise from the user. The initial target audience is people who want reassurance about remote family and friends, without having to press buttons or make a call.
This is a proposal for a synchronous layer 2 framing protocol, to replace Ethernet, and a Distributed Queue Switch Architecture to use the framing. The company has far too many ideas about how it might be used. They say they are going to open source an implementation.
There were several on stage interviews. Jack Dorsey (Twitter founder) talked about what he'd learned there. Square, his most recent venture, is a payments service - so it requires a very different reliability model from Twitter's. It has been heavily instrumented for measurement analytics (unlike Twitter, which initially had no measurement or monitoring). Square is bringing on customers gradually as it gains understanding of its scaling issues.
Asked to say what he expected to be the next big thing, he suggested that in a year or two, preventative health care would be an interesting market.
20 years ago, large national telecommunications companies still dominated the voice and data market, despite deregulation during the previous decade. Cisco Systems was a small router company, competing with Wellfleet and Proteon.
The energy business now has distinct similarities to the telecommunications business as it was then - feeling some effect of deregulation, but still heavily controlled by assorted country and local government agencies. Emissions trading and carbon offsets, combined with subsidies for alternative fuel sources, have led to widespread installation of solar panels and wind farms, proportionately more in Europe than in the US. At the consumer level, it is possible, although not economic compared to on-grid prices, to generate electricity.
The data communications protocols and standards environment was very different 20 years ago - neither TCP/IP nor Ethernet was clearly dominant (we had DECnet, IPX, Appletalk, LAT, token ring, FDDI, with Frame Relay and ATM still to come). Protocols and standards for energy network control are in about the same state as the data communications protocols of 20 years ago - many proprietary systems, utilities expecting custom implementations to match their operating requirements, incomplete standards, and patchy interoperability.
Last week, one year after the formation of the Smart Grid business unit, Cisco announced a partnership with Itron, which makes energy and water meters. The following day Cisco announced it intended to acquire Arch Rock, which makes wireless sensor technology, "focusing on energy and environmental monitoring and Smart Grid applications".
"Arch Rock will accelerate Cisco's ability to facilitate the utility industry's transition to an open and interoperable smart grid by enabling Cisco to offer a comprehensive and highly secure advanced metering infrastructure solution that is fully IP and open-standards based."
Speculation - will Cisco be able to sell huge volumes of equipment to new players in the energy markets as it did to the new players in the telecommunications markets? Or will some other, smaller, more agile company emerge, analogous to the Cisco of 20 years ago?
Updated to add: Cisco sees a huge market, and promotes RPL.
Cisco to acquire Arch Rock PR
Cisco partners with Itron PR
Global market - IC article
Cisco's Smart Grid plans GigaOm
Greentech Media notices AMI implications of acquisition
Cisco on creating the Internet of Things using RPL
Cunning Systems evaluates product and service ideas in computing and communications. If you would like to discuss an idea, contact us at email@example.com
Musemantik is a very early stage company located in Edinburgh, building middleware which infers emotion from games during design and at runtime. Fear, excitement, curiosity and other measured emotions are used to control music, lighting and camera viewpoint to increase emotional engagement and cohesion.
There will be a demonstration of the technology at the screening sessions on 25 August 2010 at the Edinburgh Interactive Festival (Filmhouse, Lothian Road, Edinburgh).
Maciej Zarawski and Diwakar Thakore are the co-founders - combining technology and business backgrounds.
My interest in this technology stems from studying Artificial Intelligence at what is now the Informatics department at Edinburgh, combined with a long term interest in music and the potential for improving immersive experiences. Advising a team located in Scotland from a Silicon Valley base creates certain challenges - but it's no more difficult than the remote acquisition work I did at Cisco Systems.
Follow-the-sun used to be about who was working when - most commonly for customer support. Beyond a certain number of calls and customers, particularly if the topic is not complex, it is cheaper to have people working in prime time in their own time zones than to run two or three shifts in one time zone. So Cisco Systems' first call centers emerged in Sydney and Amsterdam, giving technical support coverage at UTC-8 (Menlo Park, CA), UTC+10 (Sydney), and UTC+1 (Amsterdam), modulo daylight savings.
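Out of curiosity, a quick sketch of why those three locations work: a local 08:00-18:00 shift in each covers the whole UTC day. The shift hours and the fixed offsets below are my assumptions, ignoring daylight savings.

```python
# Sketch: which UTC hours are covered by a local-business-hours
# (08:00-18:00) shift in each of three call-centre time zones?
# Offsets are fixed assumptions; daylight saving is ignored.
CENTRES = {"Menlo Park": -8, "Amsterdam": +1, "Sydney": +10}

def covered_hours(centres, start=8, end=18):
    """For each UTC hour, list the centres whose local time
    falls inside [start, end)."""
    coverage = {}
    for utc_hour in range(24):
        on_duty = [name for name, offset in centres.items()
                   if start <= (utc_hour + offset) % 24 < end]
        coverage[utc_hour] = on_duty
    return coverage

cov = covered_hours(CENTRES)
gaps = [h for h, names in cov.items() if not names]
print("uncovered UTC hours:", gaps)  # → uncovered UTC hours: []
```

Sydney hands off to Amsterdam around 07:00 UTC, and Amsterdam to Menlo Park around 16:00 UTC, so the three ten-hour shifts tile the day with no gap.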
There's a new use for the same phrase - computing tasks are moved to a different geographical location to take advantage of differential electrical power costs. Solar power and wind power are variable by nature. Being able to use them close to the generating source reduces the transmission losses from moving the power to where it is being used.
In iSGTW this week "Canada’s Green Star Network aims to demonstrate that by allowing the computations to follow the renewable energy across a large, fast network, the footprint of high-throughput computing can be drastically reduced."
Google already has the capability to do this, as a side effect of the overall engineering design for their infrastructure (characterized as "design for failures"). Newer data centers in some parts of the world can be built without chillers, so that cooling uses only the outside air. On the occasional hot day, the effective compute load for that location goes down, and bulk machine to machine traffic can be moved. This could be described as follow-the-cloud, to find cooling capability, rather than follow-the-sun to find power.
Moving computation to where there is cheap sun or wind power sounds like a 'why didn't I think of that' idea - until some of the necessary details are considered.
What's the actual thing being moved? An application and all its associated data? A virtual machine? Using which operating system and hypervisor? Greg Pfister has 4 posts describing hardware virtualization in detail - the memory management issues when moving a VM over a significant distance (and therefore over many milliseconds of time) bear consideration.
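A rough model of why those memory management issues matter: pre-copy live migration sends memory in rounds, retransmitting whatever was dirtied while the previous round was in flight, and only converges if the link outruns the dirty rate. The VM size, dirty rate, and stop-and-copy threshold below are illustrative assumptions, not measurements of any particular hypervisor.

```python
def precopy_rounds(mem_gb, dirty_rate_gbps, link_gbps,
                   stop_copy_gb=0.25, max_rounds=30):
    """Estimate pre-copy live-migration rounds and total data sent (GB).
    Each round retransmits the pages dirtied during the previous round;
    the loop converges only if the link is faster than the dirty rate."""
    remaining = float(mem_gb)   # GB still to send this round
    sent = 0.0
    for r in range(1, max_rounds + 1):
        seconds = remaining * 8 / link_gbps          # time to push this round
        sent += remaining
        remaining = min(mem_gb, dirty_rate_gbps * seconds / 8)  # dirtied meanwhile
        if remaining <= stop_copy_gb:
            return r, sent + remaining   # final stop-and-copy pass
    return max_rounds, sent + remaining  # never converged

# 16 GB VM, 2 Gbps dirty rate, 10 Gbps link: converges in a few rounds
print(precopy_rounds(16, 2, 10))
# Same VM with dirty rate equal to link rate: never converges
print(precopy_rounds(16, 10, 10))
```

Long distance adds RTT-dependent costs on top of this, but even the bandwidth-only model shows the cliff: once the dirty rate approaches the link rate, the migration never finishes.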
Bandwidth - Google owns a lot of its own fiber, so can run many 10Gbps (now, 40Gbps and 100 Gbps soon) streams on its WDMs. Greenstar will have access to the Canarie research network, which has 10Gbps on each wavelength.
Availability - Design for high availability system wide requires keeping spare capacity - for computing, for power, for connectivity. Designing a central control point rather than a distributed system seems easier, but doesn't scale well.
Measurement - technologies to measure where power is actually available, and models to price optimize its use, are in their infancy. Certain consumer and industrial use patterns vary consistently by time of day and time of year, but reacting on, say, 15 minute time scales at anything other than very coarse granularity (like turning up and down a gas fired power station) isn't something regulated utilities find easy. For computing load, Amazon AWS publishes 'service health', but not capacity.
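To make the pricing question concrete, here is a minimal sketch of a per-interval placement decision on 15 minute time scales. The site names, power prices, and migration cost are entirely made up; a real system would also have to model data movement time, availability, and the long-term contract problem discussed below.

```python
def cheapest_site(load_kw, interval_h, prices, current, move_cost):
    """Pick the site with the lowest cost for the next interval,
    charging a one-off migration cost when leaving `current`.
    prices: $/kWh per site; move_cost: $ per relocation (assumed values)."""
    def cost(site):
        energy = load_kw * interval_h * prices[site]
        return energy + (0 if site == current else move_cost)
    return min(prices, key=cost)

# Hypothetical sites and spot prices, for a 500 kW load per 15-minute slot
prices = {"quebec-hydro": 0.03, "california": 0.12, "texas-wind": 0.02}
site = cheapest_site(load_kw=500, interval_h=0.25, prices=prices,
                     current="california", move_cost=5.0)
print(site)  # → texas-wind
```

Note how the migration cost creates hysteresis: raise `move_cost` enough and staying put wins even against cheaper power elsewhere.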
These are just a few of the issues which will need to be addressed. Standards are in progress for moving data between 'clouds', whether private or public. It is possible to quickly turn up computing capability at different data centers, but since data centers typically buy power on long-term contracts, there is no variable price signal to use to affect a location decision.
Summary: relocatable computing to follow the sun or follow the wind could happen, particularly if the price of power stays high enough to encourage the necessary development, but there are many interacting details, in systems which are currently opaque to their purchasers. Developing Platforms as a Service (PaaS) to support relocation for general purpose computing is a long term project.
iSGTW article on the GreenStar network
Cybera (Alberta infrastructure research institution) description of its participation in and use of the GreenStar networks
Google infrastructure talk at Nanog 49
Hardware virtualization, Greg Pfister
AWS status reporting
Even Greylock has Valley envy - they are a very long-established VC firm, which used to be firmly based in Boston. Next month they are moving their headquarters to Palo Alto.
Reason - lack of critical mass (presumably, of companies in whom they already invest and of new targets). Henry McCance, Greylock's chairman emeritus, interviewed in Xconomy, said "Boston has virtually no important entrants in arguably the most important information technology segment of the last 20 years, in spite of the preeminent universities of Harvard and M.I.T.”
For companies designing new hardware, the Valley has a huge depth and breadth of people building ASICs, companies who can translate a functional specification into a PCB, companies who can source small volumes of components, companies who will do preliminary and/or final assembly and test, and excellent air freight service out of San Jose or Oakland. There's a well established path between here and China for scaling up operations; good operations people here have many years of experience in setting up supply chains and managing remote assembly lines.
It isn't so obvious why the big software companies are here: Facebook could have stayed at Harvard, Google could have stayed in Michigan or Maryland - except that the Valley also has great breadth and depth in software people, as well as all the business support structure: short leases for office space are normal, and law firms are used to helping startups now for very little, in exchange for the opportunity for more lucrative business later.
Right now, the weather is great too - it makes many things more predictable if there is no rain between May and September. It's been cool enough not to need air conditioning, if the building can be ventilated with outside air.
Like many other parts of the world, Scotland would like to do a better job of creating companies which grow to be as successful as the big companies in the Valley. I created a LinkedIn group to discuss what might improve the business environment there - contact me at email@example.com if you want to participate. Realistically, though, the Valley has, say, a 50 year head start (Hewlett Packard founded in 1939, Fairchild Semiconductor in 1957). Now that the Internet (Cisco and Juniper, both headquartered in the Valley, have a combined market share of more than 80% of core routers sold) has made communication so much easier, the way forward is to understand where Scots have a comparative advantage. Take lessons from the Valley about the business environment, and be prepared to compete to keep the potential company builders based locally, or to get them to return to apply experience gained elsewhere.
Updated to add : Bradford Cross has a good historical perspective, looking back to the California Gold Rush and developments in ham radio, to explain why Silicon Valley is what it is. "Dramatic cultural change without an overwhelming catalyst or crisis is not something that happens in the span of a business cycle, it is something that happens in the course of one or more human lifetimes."
7 September - adding reference to Margaret O'Mara's article in Foreign Policy. Words of advice for foreigners looking to emulate the Valley: you need to attract the world's most talented people, and it's a global competition. Infrastructure, transport, and clean air all matter. Tax breaks, low barriers to foreign investment, and encouraging immigration have been demonstrated to help.
Compared to many places, Scotland's weather is benign - although it is wetter and colder than in the Valley.
6 November - updated to add reference to the history of the Valley from the perspective of the lawyers involved. Links to an interview with Larry Sonsini. I've been a customer of both Wilson Sonsini, and Cooley. Long ago I installed routers for the London office of Brobeck, when they were Cisco's lawyers.
2 March 2012 - adding reference to Jenny Jung, @jung , on being a German visitor to Silicon Valley.
Cunning Systems is located in Los Altos, near Stanford University, HP, Google, and Cisco Systems.
The trip starts by getting on to Moffett Field - which has security at the gate, so have photo ID for each person to hand. Signposting is good - check in is at building 20. Coffee, juice and scones are available before departing - they don't understand tea. Here's the door to the building, which Airship Ventures shares with the Singularity University.
Airship Ventures have their base at NASA Ames, which is about 5 miles from where we live. Given the offer of a balloon trip as a birthday present, I substituted a Zeppelin trip. The current plan is to go on 11th July, a week from today.
Photo Janet Beegle
Background information :
Manufactured in Friedrichshafen, Germany by ZLT Zeppelin Luftschifftechnik GmbH & Co KG
"Founded in 2007 in California, Airship Ventures, Inc., operates the only passenger airship operation in the United States, featuring the Zeppelin Eureka, the world’s largest airship. The Zeppelin’s spacious cabin comfortably accommodates one pilot, one flight attendant, and 12 passengers, with panoramic windows, an onboard restroom with window, a 180-degree rear observation window and “love seat” that wraps the entire aft of the cabin. Using helium for lift, and vectored thrust engines for flight, Zeppelin NTs have been flying in Germany and Japan since 1997.
Two lateral and one rear engine provide the flight control and propulsion for the airship. The three engines combine to produce a maximum speed of 78 miles per hour, with typical cruising speed of 35 to 40 mph. Because the engines are mounted far above the cabin the passengers experience low noise level. This position is also responsible for the high performance maneuvering capabilities."
In addition to the Moffett Field base, there are flights from Monterey, Los Angeles and San Diego. European flights start at Friedrichshafen, which is near to the southern border of Germany on Lake Constance (Bodensee).
As it turned out, I didn't go to this Nanog in person, even though it was in the city. I did listen to and watch much of what was made available for remote access.
Summaries of the more interesting topics follow. Refer to the agenda
for pdfs of the slides. For more detail, see Matthew Petach's almost
real-time notes on the Nanog mailing list.
Google Infrastructure team's approach to network architecture
- Vijay Gill
At the scale at which they operate, complexity kills. The overarching
mantra is "build simple things" to the extent that it's possible. In
particular, don't do complexity twice - keep the network
unsophisticated and build the necessary complexity into the services
architecture. In order to keep latency of search to users down, they
keep the majority of the search index in DRAM. A cluster of 30 racks
has 30 TB of DRAM. In parallel with the ordered results of a search, a
real time advert auction takes place - both have to be delivered back
to the user quickly enough to keep attention. Vijay polled the audience
(about 500 people at that point) - only one of them, not counting
Vijay, admitted to ever having clicked on a Google advert. Even given that
Nanog meeting attendees are not Google's target audience, there is a
reason for the focus on doing search and delivering advertising at huge
scale and low cost.
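A sanity check on those numbers: 30 TB across 30 racks is about 1 TB of DRAM per rack. The servers-per-rack figure below is my assumption, not something from the talk.

```python
# Back-of-envelope: DRAM per rack and per server for a 30-rack,
# 30 TB search cluster. servers_per_rack is an assumed value.
cluster_racks = 30
cluster_dram_tb = 30
servers_per_rack = 40

dram_per_rack_gb = cluster_dram_tb * 1024 / cluster_racks
dram_per_server_gb = dram_per_rack_gb / servers_per_rack
print(dram_per_rack_gb, dram_per_server_gb)  # → 1024.0 25.6
```

At roughly 25 GB of DRAM per server under that assumption, keeping the majority of the index in memory is a hardware provisioning decision, not an exotic one.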
From Vijay Gill's presentation
They use the highest bandwidth they can to connect datacentres. Recently
designed data centers located in reasonable climates have been built
without chillers - on the few hot days, they shed the bulk, machine to
machine compute and storage load to other locations, and keep only the
locally critical systems running. To achieve this cost effectively,
they aim to run DWDM integrated with LSR (for MPLS), with the control
plane at the IP/MPLS layer. They do not need an additional control
plane at the optical layer.
As enterprises develop their use of 'the cloud', ie, outsourced data
centres, this model is worth bearing in mind. If the great majority of
traffic on the fibre is bulk IP traffic (like Google's internal
traffic) then an IP-only control plane should be cheaper and simpler
than a network with control at the optical layer in addition to the IP
layer.
Remaining IPv4 addresses - measuring pollution - Manish Karir
(Merit, APNIC, University of Michigan)
There are 16 remaining /8s in IPv4. This session reviewed the traffic
volume and types on 1.0.0.0/8, which is nominally unallocated,
comparing it to another unallocated /8. There is a background
level of 130 - 150 Mbps of traffic on 1.0.0.0/8, most of it RTP
(audio), compared to 25 Mbps of traffic on the comparison /8.
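For scale, each /8 is 2^24 addresses, so the 16 remaining blocks work out as follows:

```python
# How much address space is left in the 16 unallocated /8s?
remaining_slash8 = 16
addresses_per_slash8 = 2 ** (32 - 8)        # 16,777,216 addresses per /8
total = remaining_slash8 * addresses_per_slash8
fraction = total / 2 ** 32                  # share of the whole IPv4 space

print(total, fraction)  # → 268435456 0.0625
```

About 268 million addresses - one sixteenth of the whole IPv4 space - which explains the attention being paid to pollution in the blocks that remain.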
Google's IPv6 implementation - what, how, why, timeline -
Lorenzo Colitti, Google
We can't enable IPv6 for www.google.com today, because ~ 0.1%
of users won't reach Google any more, and that's too high a percentage.
We can enable IPv6 access for selected networks. Most Google services
are available - www, mail, news, docs, youtube, ...
Requirements: Good IPv6 connectivity to Google, production-quality IPv6
network, acceptable user breakage.
This is much harder than it sounds.
ARIN update - John Curran
4-byte ASNs are not widely supported - 35 have been issued since 2007,
out of 426 initial requests. There will be a new Whois service on 26
June. DNSSEC is being rolled out.
ASICs - what they are, how they have evolved, how they are
built - Chang-Hong Wu, Juniper
A high level review of how computing hardware capability has evolved,
including a review of memory. Nothing about how particular router
features (route tables, ACLs) are affected by the hardware.
Currently, anyone announcing BGP routes can announce anything, and it's
up to the neighbours to decide whether to believe or use the
announcements. The Secure Interdomain Routing group (IETF) proposal is
to use RPKI to authenticate routing updates. Prefix hijacking, or more
specific path hijacking, which route validation intends to prevent, is
usually the result of mistakes, although it could be malicious.
The slides explain the BGP mechanism involved, and give policy
examples, configuration, and show command lists. There is an open test
bed. Contact Ed Kern to participate.
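A much-simplified sketch of the origin validation that RPKI enables: an announcement is 'valid' if some ROA matches the origin AS and the prefix is no more specific than the ROA's maxLength, 'invalid' if a covering ROA exists but doesn't match, and 'unknown' otherwise. The ROA table and AS numbers below are example values; real validators work from cryptographically signed objects, not Python tuples.

```python
import ipaddress

def validate_origin(announced_prefix, origin_as, roas):
    """Toy RPKI origin validation. roas: list of
    (prefix, authorized origin AS, maxLength) tuples."""
    prefix = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, roa_as, max_len in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if prefix.version == roa_net.version and prefix.subnet_of(roa_net):
            covered = True
            if roa_as == origin_as and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

roas = [("192.0.2.0/24", 64500, 24)]  # example ROA, documentation prefix
print(validate_origin("192.0.2.0/24", 64500, roas))    # → valid
print(validate_origin("192.0.2.0/25", 64500, roas))    # → invalid (too specific)
print(validate_origin("198.51.100.0/24", 64500, roas)) # → unknown (no covering ROA)
```

The "invalid because too specific" case is exactly the more-specific-path hijack the talk describes - a /25 carved out of someone else's /24 attracts their traffic unless validation rejects it.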
Netdot - useful looking open source network documentation and
management tool - Carlos Vicente, University of Oregon
IEEE 802.1aq - Shortest Path Bridging - Peter Ashwood-Smith
A detailed description of what it is and what it does - it replaces
spanning tree, and makes it possible to use more bisection bandwidth in
data centers or metro areas. It has the same goals as TRILL for ECMP,
and he'd like to see the two pieces of work combined. Scales to about
1000 devices, uses IS-IS. There's a worked example in slides 34 - 42.
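The ECMP goal that 802.1aq and TRILL share can be illustrated with a toy flow-hash: hash the flow identifiers so every packet of one flow takes the same path (preserving ordering), while different flows spread across the equal-cost paths. Real switches do this in hardware with much simpler hash functions; the path names and flow tuple here are made up.

```python
import hashlib

def ecmp_path(flow, paths):
    """Pick one of several equal-cost paths by hashing the flow
    identifiers, so all packets of a flow take the same path."""
    key = "|".join(map(str, flow)).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return paths[digest % len(paths)]

paths = ["via-sw1", "via-sw2", "via-sw3", "via-sw4"]
flow = ("10.0.0.1", "10.0.1.9", 6, 49152, 443)  # src, dst, proto, sport, dport
assert ecmp_path(flow, paths) == ecmp_path(flow, paths)  # stable per flow
print(ecmp_path(flow, paths))
```

Per-flow stability is the point: packets within a flow stay ordered, and the load spreads statistically across flows rather than per packet.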
Connecting the Farallon Islands to San Francisco - Tim Pozar
50km, 100W maximum power budget. Used 5.8 GHz to keep costs down. On
absolutely still days (rare) there's a marine evaporation boundary
layer which bends the signal enough to take the link down.
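A back-of-envelope check on why 50 km at 5.8 GHz is workable: the standard free-space path loss formula gives roughly 142 dB, which directional antenna gains and receiver sensitivity then have to cover. This ignores the boundary-layer bending entirely - it is only the baseline budget.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB, distance in km, frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss = fspl_db(50, 5800)   # the Farallon link: 50 km at 5.8 GHz
print(round(loss, 1))      # → 141.7
```

Two high-gain dishes (on the order of 30 dBi each at this band) plus a few watts of transmit power close that gap with margin, which is why unlicensed 5.8 GHz kept the costs down.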
Whatever else is wrong, it's always a cable problem - Tyler
Vander Ploeg, JDSU
Layer 0 fibre connectors need to be kept clean; they scratch if rubbed
together with dust between them. There are handheld microscope probes
to be used to inspect before connecting. People who are used to copper
cables connecting servers to switches have the wrong reflexes for long
haul fibre cable installations. Every time a fibre cable is connected
and disconnected, the loss goes up. Good pictures and descriptions of
the mechanics of assorted connectors.
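The point about connect/disconnect loss can be put as a simple loss budget. The per-kilometre, per-connector, and per-splice figures below are typical planning values I have assumed, not numbers from the talk - and a dirty or scratched connector can cost far more than its planning allowance.

```python
def link_loss_db(fibre_km, connectors, splices,
                 fibre_db_per_km=0.25, connector_db=0.5, splice_db=0.1):
    """Simple optical loss budget: fibre attenuation plus a fixed
    loss per connector mate and per fusion splice (assumed planning
    values, not measurements)."""
    return (fibre_km * fibre_db_per_km
            + connectors * connector_db
            + splices * splice_db)

# An 80 km span with 4 connector mates and 10 splices
print(link_loss_db(fibre_km=80, connectors=4, splices=10))  # → 23.0
```

Each avoidable mate/demate cycle nudges the connector term upward, which is why the talk's advice is to inspect, clean, and then leave connectors alone.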
Outline of a market mechanism for pricing the remaining IPv4
address space - Todd Underwood
This was the most entertaining talk of the entire 3 days.
Thought experiment: allocate unique addresses at a high price; allocate
non-unique addresses at a much lower price, based on the probability
that someone else will be on the same space. This would massively
extend the lifespan of IPv4, by allowing multiplicative reuse of the
address space, and it allows for a great derivative market.
There's dirty space, as indicated by the pollution-measurement talk on
Monday - let people who don't mind using dirty space use it.
There was an assortment of smart comments, with people minded to take
the notion somewhat seriously.
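The pricing of non-unique addresses would presumably rest on a calculation like this one - the chance that your cheap address is also held by someone else, assuming independent uniform assignment (a big simplification of the thought experiment; the pool size and user count are arbitrary examples).

```python
def collision_probability(users, pool_size):
    """Chance that a randomly assigned non-unique address is also
    held by at least one of the other users, assuming independent
    uniform assignment from the pool."""
    return 1 - (1 - 1 / pool_size) ** (users - 1)

# e.g. 1,000 users sharing a /16-sized pool of 65,536 addresses
p = collision_probability(1000, 65536)
print(f"{p:.3%}")
```

A roughly 1.5% collision chance in that example - low enough that a discount price for non-unique space, with collision probability on the label, is at least an arguable product.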
Last talk - Measuring access connectivity - Nick Weaver, ICSI
They've built a tool called Netalyzr, for network debugging and
measurement; the back end is EC2 (Amazon).
It tests for the presence of NAT(s), looks at link properties, finds
which TCP and UDP ports are filtered, looks for HTTP proxies, and
checks DNS behaviour, IPv6 support, and clock drift.
Nanog governance changes
It is proposed that Nanog turn itself into a separate 501(c)(3) - up
until now it has been run by Merit. Discussion takes place on the
Nanog-futures mailing list - the archive for that list has the details.
Despite all of the anxieties about the world economy, some indicators are strongly positive. Akamai operates at least 61,000 servers, very widely distributed, supporting billions of http requests. They publish a 'State of the Internet' quarterly review, with the most recent summarizing information up to the end of 2009. From the viewpoint of their servers, they observed a 4.7% increase (compared to the third quarter of 2009) globally in the number of unique IP addresses connecting to Akamai's network. Ending 2009 at 465 million unique IPs, the metric grew 16% from the end of 2008, and nearly 54% from the end of 2007. They also report on the changes in the bandwidth of the connections being made to their servers - global connections at rates above 5 Mbps increased 12% in Q4 2009 compared to Q3 2009; connections at less than 256 Kbps increased 41% over the same time period.
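Those growth rates can be cross-checked: working backwards from the 465 million figure, the published 16% and 54% numbers imply that growth in 2008 was actually faster than in 2009.

```python
# Work backwards from Akamai's published figures: 465M unique IPs at
# end-2009, up 16% year on year and ~54% over two years.
end_2009_m = 465                     # million unique IPs

end_2008_m = end_2009_m / 1.16       # implied by 16% year-on-year growth
end_2007_m = end_2009_m / 1.54       # implied by ~54% two-year growth
yoy_2008 = end_2008_m / end_2007_m - 1   # implied growth during 2008

print(round(end_2008_m), round(end_2007_m), f"{yoy_2008:.1%}")
# → 401 302 32.8%
```

So the implied 2008 growth rate was about 33% against 16% in 2009 - consistent with the recession biting, while the absolute trend stayed firmly upward.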
This big increase in comparatively slow speed connections is attributed to growth in mobile connections. This week, Akamai announced the acquisition of Velocitude, which builds a platform for converting content originally intended for viewing on PC screens into content suitable for smaller, lower bandwidth mobile screens. Akamai has made only 4 acquisitions in the last 5 years - it has a reputation as having a very strong 'Not Invented Here' culture. However, according to the press release, Velocitude is being acquired for its technology.
Mary Meeker's Internet Trends report, also out this week, estimates that smartphone shipments will exceed PC shipments worldwide in 2012. Also, it reports that users' expectations of their devices are changing to expect always on access, very fast boot times, low latency access to almost all information, day long battery life and elegant design. Users do not want to have to care whether the CPU, memory and storage they are using are on the device in their hand, or remote in the cloud.
The Akamai numbers predate the iPad launch - it will be interesting to compare the numbers for the first and second quarters of 2010 to the numbers from the end of 2009.
Updated to add: Geoff Huston has instructive opinions on Internet growth in 2009, from the perspective of IP address allocation - he sees the same indicators of growth in mobile service.