Last week Peter Thiel (ex-PayPal) spoke at the NextGen conference about the investment perspective he uses for the Founders Fund.
He's interested in things which can make a big difference - if you are starting a company, you are taking so much risk that you might as well try to do something big. Pick a contrarian field, rather than diving into the current bubble. The fund is interested in artificial intelligence, robotics, space, and next-generation biotechnology, among other things. They look for talented people passionate about changing the world, and believe only some thinking should be outsourced to others. Companies with a complex sales cycle are underestimated.
All of this is encouraging for someone who has made a habit of looking out for the ideas and technologies likely to have long-term fundamental effects, as an indicator of where to invest time and attention. It feels very different from the 'chase the current meme' approach which appears to drive many investors' decisions.
Talking of current themes, Brad Feld said today he's seen several proposals for 'the Quora of X' - people taking advantage of the meme for curated Q&A to use it as an explanation for their latest idea.
Quora is one of the better Question and Answer forums - I joined it last August. To get started, one can follow Facebook friends, Twitter followers, and/or topics. Anything one writes is public - so over time, answers accumulate into something that looks a lot like a blog. There's some thoughtful design keeping the pages simple in appearance, while linking back to the original questions to preserve context. There's an accumulation of reputation, too, as logged-in readers can vote answers up and thank their authors. So far, it's not obvious what the business model will be (and the question has been asked).
Robert Scoble, who tracks interesting technology trends and writes about them, recently discovered Quora, and is enthusiastic - when Robert is enthusiastic, several thousand of his followers know all about it, and quite a number of them joined Quora too. It seems to have survived that onslaught.
Roger Ehrenberg, at IAVentures, has been writing long form answers in the Venture Capital and Startups topics - a couple of days ago he asked (on Twitter) whether he should cross post his answers from Quora to his own blog. Quora makes it easy to send a post to Twitter pointing to a new Quora answer. Adam Lasnick, who's a Google Webmaster, has a good list of pros and cons.
Like him, I conclude that the visibility, the associated possibility of helping more people, and the ease of use outweigh the disadvantages - like having much less visibility into who sees your writing. Note, however, that I'm writing this as a blog post on my own site - it's not a question, and some of the audience for my blog hasn't (yet) found Quora ...
Stack Overflow is a longer-established Q&A site, much more tightly focused on programmers - reputations built there are an obvious target for recruiters, as Robert Scoble pointed out yesterday. Hunch is also driven by questions, and gives answers - initially it looked like a trivia game, but has become more obviously a way to recommend products by matching people's preference patterns.
All examples of crowdsourcing - building a site where the users of the site build the content and the value. Facebook are past masters at this (and Goldman Sachs agrees, given its $375m ($75m to Digital Sky) investment). We are being their digital peasants. Time to build our own castle?
My purpose for attending SF MusicTech Summit was to explore a different part of the technology business. My background and experience are mostly in the plumbing of the Internet - in the switches and valves which connect the bit streams, both to make them go faster and to filter out the evil bits. The music business, like many other application areas, assumes that all of that infrastructure exists, so that their customers can get whatever bits they ask for. To them the network really is like electricity and water - available for a small fee, and they need pay no attention to how it is constructed, since their bottlenecks are neither the network nor anything in the computing infrastructure. Their challenge is finding people who want to find them - competing for attention, which is a scarce commodity.
Hardware Demos:
Yobble showed a device which connects a movement sensor in a guitar pick to an iPhone - making possible an 'air guitar' effect, either modifying existing music or creating freeform new music - using 'buttons' on the iPhone for note selection. The target market is people who have iPhones and enjoy Rock Band.
HRT played another device - this one connects to an iPhone on one side and a good-quality entertainment system on the other. The embedded digital-to-analog converter bypasses the DAC in the iPhone and produces audiophile-quality sound from a box about 6" x 2 1/2" x 1"; there's a USB version too, priced at about $199.
The Developer Platform panel discussed what it takes to get your app noticed - attend Music Hack Days, and/or engage with Appbistro - Ryan Merket, formerly of developer relations at Facebook, was articulate and passionate about the potential for music apps on Facebook.
Most interesting was the last panel, What Music Companies Need from Startups, chaired by Ian Rogers. Ethan Kaplan (Warner) and Aaron Foreman (Universal) were there from the big labels - they were quite specific. They want an app, API keys, some Python or PHP - and a clear demonstration of the technology behind the proposed solution, along with a demonstration of how it adds value to what the label is doing, and a way to measure that it is really doing what it claims. Don't expect to get a hearing unless you have a test plan! Ian Hogarth (Songkick) and Rachel Masters (Red Magnet) were singing the same tune - it's all about the musicians and the fans. It all sounded a little forced - the music is a product, like many other results of creative talent.
There was remarkably little live music actually played. A grand piano, cover on, had been pushed under the stairs beside the downstairs toilets in the hotel - after the sessions finished, two people had opened it and were playing, prompted by chord progressions on an iPhone propped on the music stand.
Other people who know much more about the music business blogged about the conference too.
Several weeks back Brad Feld (Foundry Group) and David Cohen (Techstars) started a book tour to promote their book - Brad writes about the process on his blog. One of the first events of the tour was held at Silicon Valley Bank's Palo Alto office - Brad and David talked a little about the writing exercise, answered questions, and passed out copies of the book to attendees. Last week David Cohen made a polite follow up request for reviews, so here is mine.
Brad Feld is a Managing Director at the Foundry Group (Boulder, CO), a venture capital company, and with David Cohen founded TechStars. TechStars is a 13-week mentorship-driven seed-stage investment program which now runs several times a year in four US cities, having started in Boulder. The book is a lot like a collection of blog posts - it draws on the experiences of the people who attended TechStars, and of the people who mentored and invested in them. If you've followed me @vcwatch on Twitter, the intended audience is the same - people starting companies, people operating small technology companies, people looking for investors, and investors looking for teams and ideas, particularly seed-stage companies.
The contents include how to identify ideas which might turn into a product, how to tell when you should change your mind about what you are doing, finding a co-founder and a team, measuring what you are doing, fundraising, legal, and how to have a life as well as start a company. If you intend to apply to TechStars, you should read this book, since the other applicants will have, and it gives you a head start on the process.
TechStars, and this book, are, as Brad acknowledged, about a very specific kind of business - one where the customers can be reached via the Internet, and where it is possible to get to a point where value is demonstrated in less than 3 months. It's not clear whether the mentorship and small-amount-of-funding model applies to businesses which require a larger-scale implementation to tell whether they are viable. Biotech and cleantech businesses, for example, or anything which requires new ASICs, take much more money and much more time to show a return.
If your first books about business were Tom Peters' 'A Passion for Excellence' (1984) and/or 'Thriving on Chaos' (1987), and you haven't been paying attention to very small startup businesses in the last 5 years, you can catch up by reading 'Do More Faster' - and by thinking about what the kind of companies described here implies for the dynamics of business. The ideas are sometimes very limited, and the company lifetime short - some of the companies funded in 2007 at the first TechStars have already been acquired, and their founders are investing in other, newer companies.
Another group who should read this book are people working for governments which would like to see the same kind of small-company ecosystem described here in their jurisdiction - and they should notice that the book says nothing at all about government assistance. There's a short section on the choice of state law in the US (found the company in Delaware, not any of the other 49 states), and on picking a lawyer. There's no government support mentioned, nor required. Implicitly, it is fairly straightforward to found and operate a company in the US. If it isn't at least as easy to do that in your jurisdiction, companies will want to move away from you, diminishing your tax take and the possibility that successful founders will do the trick again - while employing people who might also catch the company-formation bug.
The book has a decent index, and very few typographical errors - unusual for first-time authors.
It is worth the time to read, to get perspective on the kinds of ideas which have been the kernel of new companies, and the accelerated process by which those companies fail and succeed.
Cunning Systems evaluates product and service ideas in computing and communications. If you would like to discuss an idea, contact us at firstname.lastname@example.org
Content Networking, or Content-centric Networking as PARC now call it, is an architecture which Van Jacobson has been promoting for some years. It's worth attention, both because of Van's reputation and because it's a very interesting answer to the question "knowing what we know now about what people want and what capabilities can be provided, how should networking and computing be structured?"
"Content-centric networking is PARC's vision for taking the next step in data communication — a change in network architecture to make content retrieval by name, not location, the fundamental operation of the network. Our approach is to reuse and build upon successful features of TCP/IP, with the key change of replacing the machine-oriented IP model with a named content model as the basis for the central protocol that connects networks."
If networks work natively in terms of content - in other words, if the name a user expresses to the network to retrieve a piece of content refers in some way to the content item itself, rather than to some specific host on which the content can be found - then users can securely retrieve desired content by name, and authenticate the result regardless of where it comes from. (Refer to the paper on Securing Network Content.)
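The core idea can be sketched in a few lines. This is a toy model, not PARC's protocol: the names, functions, and the digest standing in for a real publisher signature are all illustrative assumptions; in CCN proper, a cryptographic signature binds name and data so that any copy, from any node, can be verified.

```python
import hashlib

def publish(store, name, data):
    # Bind name and data together with a digest. In real CCN this is a
    # publisher signature over the name/data pair, not a bare hash.
    digest = hashlib.sha256(name.encode() + data).hexdigest()
    store[name] = (data, digest)

def retrieve(store, name):
    # Any node holding a copy may answer. The consumer verifies the
    # name-to-data binding itself, so where the bits came from is moot.
    data, digest = store[name]
    if hashlib.sha256(name.encode() + data).hexdigest() != digest:
        raise ValueError("content failed verification")
    return data

store = {}
publish(store, "/parc/papers/securing-content", b"paper bytes")
print(retrieve(store, "/parc/papers/securing-content"))
```

The point of the exercise: retrieval is keyed by the name alone, and verification travels with the content rather than depending on a trusted host.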
Pere Monclus, with whom I used to work when at Cisco, talked at PARC a couple of weeks ago about how difficult Cisco finds it to implement support for new stuff like this.
23 Nov 2010 - Updated
As Named Data Networking, there is NSF funding for further research into CCN. The PARC technical report NDN-001 (Oct 2010) describes the project in more detail.
PARC's content-networking page
Discussion between Van Jacobson and Craig Partridge (free, despite being at acm.org)
Securing Network Content pdf
Other NDN resources
Ray Ozzie's 'dawn of a new day' post, on leaving Microsoft, is worth reading if you haven't already seen it.
It's a good snapshot of a broad set of things going on in the computing and networking space. There's a useful model for understanding how many of the important components being built will fit together:
" a world of
cloud-based continuous services that connect us all and do our bidding, and
appliance-like connected devices enabling us to interact with those cloud-based services."
He predicts that this is the correct model for the next 5 years: in particular, that apps and browsers will provide simplicity for the end user, so that they really don't have to understand how the car works in order to drive it.
Most of what he's saying could be regarded as obvious, though it is worth noticing that he thought to say what he has to the executive staff at Microsoft ...
He explicitly calls out embedded devices as a subset of connected devices.
My thought is that there is an opportunity here to create "appliance-like, easily configured, interchangeable and replaceable without loss" devices. The design and software requirements for such things, though more relaxed than they were in the first era of microprocessor process control, require a more precise discipline than is necessary for application software directly used by humans. There will be constraints on available power, space, operating temperature, network bandwidth, and connection capability. Toolsets to support development of this kind are still expensive and complex compared to normal software development environments.
Hardware platforms exist, and prices are dropping rapidly, driven by the huge volume requirements of the low end of the mobile phone platforms.
For an example confirming the other part of the model, look at the growth of Google's traffic as seen from the public Internet - the majority of this is video, with YouTube as the service. Craig Labovitz (Arbor Networks) estimates Google's traffic volume at between 6% and 12% of all the Internet traffic in the world, and it's still growing.
Ray Ozzie's letter, on leaving Microsoft, 28 October 2010
Google's traffic as a percentage of all Internet traffic
Paul Kedrosky (Kauffman Foundation) noted the similarity of the new power grid to the public Internet in a talk to the ARPA-E meeting this week. There are now many low-wattage power sources (low compared to the total output of the major power stations) which could be controlled as a multicast mesh. Smart meters measure power factor - the quality and efficiency of power distribution between the substation and consumers could be improved with more granular control over capacitor banks and local disconnects.
There are some pilot implementations and trials in progress - SmartGridNews has a good list. The usual model is to use 900 MHz wireless for control-system signaling and data collection - adequate for the volume of data generated. Other systems use the cellular wireless service. If fiber were to be pulled or blown alongside the existing power lines, it would be possible to create an additional local loop, competing with the existing telecom, cable, and wireless providers.
As previously noted, this market is at about the same stage of development as the telecommunications market was 20 years ago. Systems architecture and standards work could create at least the same scale of opportunities for innovation, while learning from the experience of developing that market.
Last week's North American Network Operators Group (NANOG) meeting was in Atlanta - without a sponsor, I don't attend the meetings in person. However, the subject matter is important, so I listen to the talks. This event had better quality audio and video than on previous occasions - the feeds available were the same, but the consistency with which they were available was much better. I used the Live HD VLC direct link at http://hidef.nanog.org:8080
Richard Steenbergen's talk on 4 October reviewed changes in the size of the Internet global routing table, February - September 2010, possible reasons for the growth, and ways to reduce the size of the table (so as to be able to carry traffic from more places while reducing the processing and memory requirements on the routers).
This is one of those topics where there are only a few hundred people in the world, if that many, who understand the dynamics of the global routing table, and they are retiring faster than new people are coming in to the business. Of those few hundred, only a handful are capable of developing new protocols, or improving on the existing, rather fragile, state of affairs.
Traffic engineering - controlling which packets take which paths into and out of ISPs - requires more-specific addresses, which make for more work. There are clear signs of incompetence, too - some small countries, with very few physical routes, are deaggregating far more than can possibly be useful. Geoff Huston pointed out that, contrary to intuition, his measurements indicate that more BGP routes do not lead to more BGP churn. Chris Morrow suggested that there are very few tools to help do this well - more education and more protocol work would both help.
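Deaggregation, and the aggregation that would undo it, is easy to illustrate. A minimal sketch with Python's standard ipaddress module, using hypothetical prefixes: four contiguous /24 announcements that could be replaced by a single covering /22, which is exactly the kind of table-shrinking cleanup the talk was about.

```python
import ipaddress

# Four contiguous /24 announcements (hypothetical prefixes) - the sort
# of deaggregation that bloats the global routing table.
deaggregated = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]

# collapse_addresses merges contiguous networks into the fewest
# covering prefixes - here, a single /22.
aggregated = list(ipaddress.collapse_addresses(deaggregated))
print(aggregated)  # [IPv4Network('10.1.0.0/22')]
```

The catch, as the talk noted, is that traffic engineering often depends on announcing the more-specifics deliberately - so aggregation is a policy question, not just a computation.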
Phil Smith's BGP tutorials have background and pointers for how to get started (see the Agenda)
On 5 October, Greg Hankins from Brocade put together a wide-ranging review of the current state of Ethernet ... there's a good list of 40G and 100G physical layer specifications for different distances; an overview of IEEE 802.3az-2010 Energy Efficient Ethernet, just approved; useful remarks on the status of MPLS and OAM for Carrier Ethernet; some good diagrams describing what Cisco called Data Center Ethernet and is now called Data Center Bridging, for carrying SAN and LAN traffic on the same Ethernet; and a summary of the replacements for Spanning Tree, comparing TRILL and Shortest Path Bridging. All this in 59 slides and 30 minutes.
Brian Martin from CERN (Geneva) described how they monitor the network supporting one of the experiments at the Large Hadron Collider. Graphical representation for various levels of detail for 8000 ports is necessary, but difficult - they built a hierarchical model.
Don Lee described the experimental LISP (Locator/ID Separation Protocol - Dino Farinacci's work) implementation he's done for IPv4 and IPv6 at Facebook - despite Facebook's scale, the amount of configuration work was small and the installation time short. Planning and design took longer.
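The separation LISP makes is simple to sketch: an endpoint identifier (EID) says who you are, a routing locator (RLOC) says where you attach, and a map-cache translates one into the other before encapsulation. This toy lookup, with made-up prefixes and addresses, is only an illustration of the concept, not the protocol's actual map-request machinery.

```python
import ipaddress

# Toy LISP map-cache: EID prefixes map to the RLOCs of the site's
# tunnel routers. All prefixes and addresses here are hypothetical.
MAP_CACHE = {
    ipaddress.ip_network("203.0.113.0/24"): ["198.51.100.1", "198.51.100.2"],
    ipaddress.ip_network("203.0.0.0/16"): ["198.51.100.9"],
}

def rlocs_for(dest):
    """Longest-prefix match of a destination EID against the map-cache;
    the ingress tunnel router would encapsulate toward one of the RLOCs."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in MAP_CACHE if addr in net]
    if not matches:
        return []
    best = max(matches, key=lambda n: n.prefixlen)
    return MAP_CACHE[best]

print(rlocs_for("203.0.113.7"))  # ['198.51.100.1', '198.51.100.2']
```

Because only the locator side is visible to the core, renumbering a site or multihoming it changes the map-cache, not the global routing table - which is the connection back to the table-growth discussion above.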
The great majority of the presentation material is linked to the Agenda, and the video usually gets posted to the meeting archives - it's not there yet, as of 11 October.
The next meeting is in Miami, January 30 to February 2, 2011