Several weeks back Brad Feld (Foundry Group) and David Cohen (TechStars) started a book tour to promote their book - Brad writes about the process on his blog. One of the first events of the tour was held at Silicon Valley Bank's Palo Alto office - Brad and David talked a little about the writing exercise, answered questions, and passed out copies of the book to attendees. Last week David Cohen made a polite follow-up request for reviews, so here is mine.
Brad Feld is a Managing Director at the Foundry Group (Boulder, CO), a venture capital firm, and with David Cohen founded TechStars. TechStars is a 13-week mentorship-driven seed-stage investment program which now runs several times a year in four US cities, having started in Boulder. The book is a lot like a collection of blog posts - it draws on the experiences of the people who attended TechStars, and of the people who mentored and invested in them. If you follow me (@vcwatch) on Twitter, the intended audience is the same - people starting companies, people operating small technology companies, people looking for investors, and investors looking for teams and ideas, particularly at seed stage.
The contents include how to identify ideas which might turn into a product, how to tell when you should change your mind about what you are doing, finding a co-founder and a team, measuring what you are doing, fundraising, legal, and how to have a life as well as start a company. If you intend to apply to TechStars, you should read this book, since the other applicants will have, and it gives you a head start on the process.
TechStars, and this book, are, as Brad acknowledged, about a very specific kind of business - one where the customers can be reached via the Internet, and where it is possible to get to a point where value is demonstrated in less than three months. It's not clear whether the mentorship-and-small-funding model applies to businesses which require a larger-scale implementation to tell whether they are viable. Biotech and cleantech businesses, for example, or anything which requires new ASICs, take much more money and much more time to show a return.
If your first books about business were 'A Passion for Excellence' (Tom Peters, 1985) and/or 'Thriving on Chaos' (Tom Peters, 1987), and you haven't been paying attention to very small startup businesses in the last 5 years, you can catch up by reading 'Do More Faster' - and by thinking about what the kind of companies described here implies for the dynamics of business. The ideas are sometimes very limited, and the company lifetime short - some of the companies funded in 2007 at the first TechStars have already been acquired, and their founders are investing in other, newer companies.
Another group who should read this book are people working for governments which would like to see the same kind of small company ecosystem described here in their jurisdiction - and they should notice that the book says nothing at all about government assistance. There's a short section on the choice of state law in the US (found the company in Delaware, not any of the other 49 states), and on picking a lawyer. There's no government support mentioned, nor required. Implicitly, it is fairly straightforward to found and operate a company in the US. If it isn't at least as easy to do that in your jurisdiction, companies will want to move away from you, diminishing the tax take and the possibility that successful founders will do the trick again while employing people who might also catch the company-formation bug.
The book has a decent index, and very few typographical errors - unusual for first-time authors.
It is worth the time to read, to get perspective on the kinds of ideas which have been the kernel of new companies, and the accelerated process by which those companies fail and succeed.
Cunning Systems evaluates product and service ideas in computing and communications. If you would like to discuss an idea, contact us at email@example.com
Content Networking, or Content-Centric Networking as PARC now calls it, is an architecture which Van Jacobson has been promoting for some years. It's worth attention, both because of Van's reputation and because it's a very interesting answer to the question "knowing what we know now about what people want and what capabilities can be provided, how should networking and computing be structured?"
"Content-centric networking is PARC's vision for taking the next step in data communication — a change in network architecture to make content retrieval by name, not location, the fundamental operation of the network. Our approach is to reuse and build upon successful features of TCP/IP, with the key change of replacing the machine-oriented IP model with a named content model as the basis for the central protocol that connects networks."
If networks work natively in terms of content – in other words, the name a user would express to the network to retrieve a piece of content refers in some way to that content item itself, rather than to some specific host on which the content can be found, then users can securely retrieve desired content by name, and authenticate the result regardless of where it comes from. (Refer to the paper on Securing Network Content)
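A minimal sketch of that core property, in Python. This assumes a hash-based self-certifying naming scheme for the illustration - real CCN names are hierarchical and are bound to content by signatures, not raw digests - and all names and functions here are made up:

```python
import hashlib

# Toy illustration of content-centric retrieval: the name commits to
# the content itself (here via a SHA-256 digest), so the consumer can
# verify what it receives no matter which host supplied it.

def make_name(content: bytes) -> str:
    """Derive a self-certifying name from the content."""
    return "/toy/sha256/" + hashlib.sha256(content).hexdigest()

def verify(name: str, content: bytes) -> bool:
    """Check that the delivered bytes match the requested name."""
    return make_name(content) == name

# Any untrusted cache or peer could have supplied these bytes.
published = b"hello, content-centric world"
name = make_name(published)

print(verify(name, published))       # True - genuine content passes
print(verify(name, b"tampered!!!"))  # False - modified content fails
```

The point of the sketch is only that once the name authenticates the data, the location it came from stops mattering - which is what lets any cache on the path satisfy a request.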
Pere Monclus, with whom I used to work when at Cisco, talked at PARC a couple of weeks ago about how difficult Cisco finds it to implement support for new stuff like this.
23 Nov 2010 - Updated
Under the name Named Data Networking, CCN now has NSF funding for further research. The PARC technical report NDN-001 (Oct 2010) describes the project in more detail.
PARC's content-networking page
Discussion between Van Jacobson and Craig Partridge (free, despite being at acm.org)
Securing Network Content pdf
Other NDN resources
Ray Ozzie's 'dawn of a new day' post, on leaving Microsoft, is worth reading if you haven't already seen it.
It's a good snapshot of a broad set of things going on in the computing and networking space. There's a useful model for understanding how many of the important components being built will fit together:
" a world of
cloud-based continuous services that connect us all and do our bidding, and
appliance-like connected devices enabling us to interact with those cloud-based services."
He predicts that this is the correct model for the next 5 years: in particular, that apps and browsers will provide enough simplicity that end users really don't have to understand how the car works in order to drive it.
Most of what he's saying could be regarded as obvious, though it is worth noticing that he thought to say what he has to the executive staff at Microsoft ...
He explicitly calls out embedded devices as a subset of connected devices.
My thought is that there is an opportunity here, to create "appliance-like, easily configured, interchangeable and replaceable without loss" devices. The design and software requirements for such things, though more relaxed than they were in the first era of microprocessor process control, demand a more precise discipline than is necessary for application software used directly by humans. There will be constraints on available power, space, operating temperature, network bandwidth, and connection capability. Toolsets to support this kind of development are still expensive and complex compared to normal software development environments.
Hardware platforms exist, and prices are dropping rapidly, driven by the huge volume requirements of the low end of the mobile phone platforms.
For an example confirming the other part of the model, look at the growth of Google's traffic as seen from the public Internet - the majority of this is video, with YouTube as the service. Craig Labovitz (Arbor Networks) estimates Google's traffic volume at between 6% and 12% of all the Internet traffic in the world, and it's still growing.
Ray Ozzie's letter, on leaving Microsoft, 28 October 2010
Google's traffic as a percentage of all Internet traffic
Paul Kedrowsky (Kauffman Foundation) noted the similarity of the new power grid to the public Internet in a talk to the ARPA-e meeting this week. There are now many low wattage power sources (low compared to the total power output from the major power stations) which could be controlled as a multi-cast mesh. Smart meters measure power factor - the quality and efficiency of power distribution between the substation and consumers could be improved with more granular control over capacitor banks and local disconnects.
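To make "smart meters measure power factor" concrete, here is a back-of-envelope Python sketch of the calculation from sampled voltage and current waveforms. The amplitudes and the 30-degree phase lag are illustrative; real meters also have to cope with harmonics and noise:

```python
import math

# Power factor from sampled waveforms: real power (mean of v*i)
# divided by apparent power (Vrms * Irms). Clean sinusoids assumed.
SAMPLES = 1000
PHASE = math.radians(30)  # current lags voltage by 30 degrees

v = [170.0 * math.sin(2 * math.pi * t / SAMPLES) for t in range(SAMPLES)]
i = [10.0 * math.sin(2 * math.pi * t / SAMPLES - PHASE) for t in range(SAMPLES)]

real_power = sum(a * b for a, b in zip(v, i)) / SAMPLES  # watts
v_rms = math.sqrt(sum(a * a for a in v) / SAMPLES)
i_rms = math.sqrt(sum(b * b for b in i) / SAMPLES)
apparent_power = v_rms * i_rms                           # volt-amperes

power_factor = real_power / apparent_power
print(f"power factor = {power_factor:.3f}")  # 0.866, i.e. cos(30 deg)
```

A power factor below 1 means current is flowing that does no useful work; switching capacitor banks in and out near the load is how a utility would push it back towards 1.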
There are some pilot implementations and trials in progress - SmartGridNews has a good list. The usual model is to use 900 MHz wireless for control system signaling and data collection - this is adequate for the volume of data generated. Other systems use the cellular wireless service. If fiber were pulled or blown alongside the existing power lines, it would be possible to create an additional local loop, competing with the existing telecom, cable, and wireless providers.
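A rough calculation shows why a 900 MHz network is adequate for metering data. All the figures below are illustrative assumptions, not measurements from any deployment:

```python
# Back-of-envelope estimate of average backhaul load for an advanced
# metering deployment. All inputs are assumed, illustrative values.
meters = 50_000            # meters behind one collection point
reading_bytes = 100        # one interval reading, with protocol overhead
interval_seconds = 15 * 60 # one reading per meter every 15 minutes

bits_per_second = meters * reading_bytes * 8 / interval_seconds
print(f"average backhaul load = {bits_per_second / 1000:.0f} kbit/s")  # 44 kbit/s
```

Tens of kilobits per second for tens of thousands of meters is comfortably inside what a 900 MHz mesh can carry, which is why the heavier lifting of fiber only makes sense if it also creates a competing local loop.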
As previously noted, this market is at about the same stage of development as the telecommunications market was 20 years ago. Systems architecture and standards work could create at least the same scale of opportunities for innovation, while learning from the experience of developing that market.
Last week's North American Network Operators' Group (NANOG) meeting was in Atlanta - without a sponsor, I don't attend the meetings in person. However, the subject matter is important, so I listen to the talks. This event had better-quality audio and video than on previous occasions - the feeds available were the same, but the consistency with which they were available was much better. I used the Live HD VLC direct link at http://hidef.nanog.org:8080
Richard Steenbergen's talk on 4 October reviewed changes in the size of the Internet global routing table, February - September 2010, possible reasons for the growth, and ways to reduce the size of the table (so as to be able to carry traffic from more places while reducing the processing and memory requirements on the routers).
This is one of those topics where there are only a few hundred people in the world, if that many, who understand the dynamics of the global routing table, and they are retiring faster than new people are coming in to the business. Of those few hundred, only a handful are capable of developing new protocols, or improving on the existing, rather fragile, state of affairs.
Traffic engineering - controlling which packets take what paths into and out of ISPs - requires more-specific addresses, which makes for more work. There are clear signs of incompetence, too - some small countries, with very few physical routes, are deaggregating much more than can possibly be useful. Geoff Huston pointed out that, contrary to intuition, his measurements indicate that more BGP routes do not lead to more BGP churn. Chris Morrow suggested that there are very few tools to help do this well - more education and more protocol work would both help.
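The deaggregation point is easy to make concrete with Python's standard library `ipaddress` module; the prefixes below are illustrative RFC 1918 space, not real announcements:

```python
import ipaddress

# Four adjacent /24 announcements occupy four routing-table slots,
# but collapse to a single covering /22 - each unnecessary
# more-specific is one more entry every default-free router carries.
more_specifics = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

aggregated = list(ipaddress.collapse_addresses(more_specifics))
print(aggregated)  # [IPv4Network('10.1.0.0/22')]
```

Traffic engineering pulls the other way - more-specific prefixes attract traffic onto particular paths - which is why tooling and education, rather than exhortation alone, are needed to keep the table small.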
Phil Smith's BGP tutorials have background and pointers for how to get started (see the Agenda)
On 5 October, Greg Hankins from Brocade presented a wide-ranging review of the current state of Ethernet - there's a good list of 40G and 100G physical layer specifications for different distances; an overview of IEEE 802.3az-2010 Energy Efficient Ethernet, just approved; useful remarks on the status of MPLS and OAM for Carrier Ethernet; some good diagrams describing what Cisco called Data Center Ethernet and is now called Data Center Bridging, for carrying SAN and LAN traffic on the same Ethernet; and a summary of the replacements for Spanning Tree, comparing TRILL and Shortest Path Bridging. All this in 59 slides and 30 minutes.
Brian Martin from CERN (Geneva) described how they monitor the network supporting one of the experiments at the Large Hadron Collider. Graphical representation for various levels of detail for 8000 ports is necessary, but difficult - they built a hierarchical model.
Don Lee described the experimental LISP (Locator/ID Separation Protocol - Dino Farinacci's work) implementation he's done for IPv4 and IPv6 at Facebook - despite Facebook's scale, the amount of configuration work was small, and the installation time short. Planning and design took longer.
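For readers who haven't met LISP, a toy sketch of its map-and-encap idea: endpoint identifiers (EIDs) are looked up in a mapping system to find routing locators (RLOCs), and packets are then tunneled to a locator. All prefixes and addresses below are illustrative documentation ranges, and the mapping table is a stand-in for LISP's real distributed mapping system:

```python
import ipaddress

# Toy LISP mapping lookup: resolve an endpoint identifier (EID) to the
# routing locators (RLOCs) of the site's tunnel routers via
# longest-prefix match. Values are illustrative only.
mapping_table = {
    ipaddress.ip_network("198.51.100.0/24"): ["203.0.113.10", "203.0.113.20"],
    ipaddress.ip_network("2001:db8:1::/48"): ["203.0.113.30"],
}

def lookup_rlocs(eid: str):
    """Return the RLOCs for the longest-matching EID prefix, if any."""
    addr = ipaddress.ip_address(eid)
    matches = [n for n in mapping_table if addr in n]
    if not matches:
        return None  # no mapping: in real LISP, a Map-Request goes out
    best = max(matches, key=lambda n: n.prefixlen)  # longest match wins
    return mapping_table[best]

print(lookup_rlocs("198.51.100.7"))   # ['203.0.113.10', '203.0.113.20']
print(lookup_rlocs("2001:db8:1::42")) # ['203.0.113.30']
```

The attraction at Facebook's kind of scale is visible even in the toy: only the border (tunnel router) configuration knows about the mapping, so the interior of the site needs no per-prefix changes.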
The great majority of the presentation material is linked to the Agenda, and the video usually gets posted to the meeting archives - it's not there yet, as of 11 October.
The next meeting is in Miami, January 30 to February 2, 2011
The Entrepreneurial Thought Leader weekly speaker series has started at Stanford - last week's speaker was the CEO of Hara, Amit Chatterjee. Hara sells environmental and energy business process management. Only about 15% of their revenue comes from custom services; the rest comes from sales of their EEM SaaS application. The company has taken $20m in funding, principally from Kleiner Perkins. Approximately two-thirds of their customers are commercial companies - the main motivation for them is to save money, by understanding their usage of power and water and minimizing it where possible. One-third of their customers are city or state government bodies - their primary motivation is to be able to demonstrate that they are doing something. The inputs to the process come from reviewing utility bills and comparing usage to what is usual for other businesses with similar requirements. Where it is possible to obtain dynamic measurements from smart meters (in this business, dynamic means a measurement every 15 minutes), they will incorporate those into their process. Siebel and SAP are competitors.
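To show what "dynamic" 15-minute data looks like on the way into such a process, here is a small Python sketch rolling interval readings up into daily consumption. The readings are fabricated for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Roll 15-minute smart-meter interval readings up into daily kWh -
# the granularity a utility-bill review would start from.
# All readings below are fabricated for illustration.
start = datetime(2010, 10, 1)
readings = [
    (start + timedelta(minutes=15 * n), 0.5)  # kWh per 15-minute interval
    for n in range(96 * 2)                    # two days, 96 intervals/day
]

daily_kwh = defaultdict(float)
for timestamp, kwh in readings:
    daily_kwh[timestamp.date()] += kwh

for day, total in sorted(daily_kwh.items()):
    print(day, f"{total:.1f} kWh")  # 48.0 kWh on each day
```

The interesting analytics start after this step - comparing the daily and intraday shapes against similar businesses, which is where a vendor like Hara adds value over the raw meter feed.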
Steve Blank and Tom Byers spoke briefly.
The series is being held in an auditorium downstairs in the almost complete new Engineering buildings - as usual at Stanford, parking is difficult or expensive or both, so come by bicycle if you can.
DEMO Fall happened last week - since it was at the Santa Clara Convention Center, 20 minutes away, I signed up. My impression from the pre-show material was that it was heavily oriented towards small software startups hoping to be picked up by investors, with the eventual target customer being big enterprises - and there was a good deal of that. There were some interesting infrastructure ideas. A noticeable fraction of the audience was international. According to Matt Marshall, the producer of the show, 800 people registered, 200 of them in the last few days - my rough headcount in the main hall on Wednesday came to about 500. The hall was well filled, with enough room to find a seat if you came in late.
The agenda has links to video of the live demos given by each of the attending companies. Of most interest to me were :
The people with the best pictures were Vizerra (company name 3DreamTeam, though their business cards say 'Vizerra'). It's a software platform for building 3D representations - they've started with World Heritage sites. Their demo also had clips from a simulation done for a helicopter vendor with a rendering of the interiors as well as the flying experience. The development team is located in Moscow.
These people brought a working cell phone base station to the show - in a 19-inch rack-mount unit about 4U high. They had just provided service at Burning Man, using a 60W solar-powered, battery-backed unit. There's an embedded software-defined radio, adaptable to assorted multinational frequencies. The target price is $10,000 per base station, supporting IP - enabling cellphone service for $2/month per user. There's open source software available for it, as well as proprietary supported software.
If you like the iPad format, but can't be constrained by Apple, look at the Touch Book - it's a tablet, with a robust detachable keyboard, with its own OS on an ARM processor and support for assorted Linux distributions.
The company had two of its hand-built prototypes working at the show. The goal is to sell a device which will sit or hang in the kitchen or family room and provide an always-on window to somewhere else, requiring no expertise from the user. The initial target audience is people who want reassurance about remote family and friends, without having to press buttons or make a call.
This is a proposal for a synchronous layer 2 framing protocol, to replace Ethernet, and a Distributed Queue Switch Architecture to use the framing. The company has far too many ideas about how it might be used. They say they are going to open source an implementation.
There were several on-stage interviews. Jack Dorsey (Twitter founder) talked about what he'd learned there. Square, his most recent venture, is a payments service - so it requires a very different reliability model from Twitter's. It has been heavily instrumented for measurement analytics (unlike Twitter, which initially had no measurement or monitoring). Square is bringing on customers gradually as it gains understanding of its scaling issues.
Asked to say what he expected to be the next big thing, he suggested that in a year or two, preventative health care would be an interesting market.
20 years ago, large national telecommunications companies still dominated the voice and data market, despite deregulation during the previous decade. Cisco Systems was a small router company, competing with Wellfleet and Proteon.
The energy business now has distinct similarities to the telecommunications business as it was then - feeling some effect of deregulation, but still heavily controlled by assorted country and local government agencies. Emissions trading and carbon offsets, combined with subsidies for alternative fuel sources, have led to widespread installation of solar panels and wind farms, more so, relative to population, in Europe than in the US. At the consumer level it is possible, although not economic compared to on-grid prices, to generate electricity.
The data communications protocols and standards environment was very different 20 years ago - neither TCP/IP nor Ethernet were clearly dominant (we had DECnet, IPX, Appletalk, LAT, token ring, FDDI, with Frame Relay and ATM still to come). Protocols and standards for energy network control are also in about the same state as the data communications protocols of 20 years ago - many proprietary systems, the utilities expect custom implementations to match their operating requirements, standards are incomplete, interoperability is patchy.
Last week, one year after the formation of the Smart Grid business unit, Cisco announced a partnership with Itron, which makes energy and water meters. The following day Cisco announced it intended to acquire Arch Rock, which makes wireless sensor technology, "focusing on energy and environmental monitoring and Smart Grid applications".
"Arch Rock will accelerate Cisco's ability to facilitate the utility industry's transition to an open and interoperable smart grid by enabling Cisco to offer a comprehensive and highly secure advanced metering infrastructure solution that is fully IP and open-standards based."
Speculation - will Cisco be able to sell huge volumes of equipment to new players in the energy markets, as it did to the new players in the telecommunications markets? Or will some other, smaller, more agile company emerge, analogous to the Cisco of 20 years ago?
Updated to add: Cisco sees a huge market, and promotes RPL.
Cisco to acquire Arch Rock PR
Cisco partners with Itron PR
Global market - IC article
Cisco's Smart Grid plans GigaOm
Greentech Media notices AMI implications of acquisition
Cisco on creating the Internet of Things using RPL
Cunning Systems evaluates product and service ideas in computing and communications. If you would like to discuss an idea, contact us at firstname.lastname@example.org
Musemantik is a very early stage company located in Edinburgh, building middleware which models emotion in games, both at design time and at runtime. Fear, excitement, curiosity and other measured emotions are used to control music, lighting and camera viewpoint, to increase emotional engagement and cohesion.
There will be a demonstration of the technology at the screening sessions on 25 August 2010 at the Edinburgh Interactive Festival (Filmhouse, Lothian Road, Edinburgh)
Maciej Zarawski and Diwakar Thakore are the co-founders - combining technology and business backgrounds.
My interest in this technology stems from studying Artificial Intelligence at what is now the Informatics department at Edinburgh, combined with a long term interest in music and the potential for improving immersive experiences. Advising a team located in Scotland from a Silicon Valley base creates certain challenges - but it's no more difficult than the remote acquisition work I did at Cisco Systems.