A new experiment

For the last two weeks I've been experimenting with writing shorter, quicker posts on another CMS which I call Bits.

I am doing this for two reasons. First, I had gotten "stuck" (unmotivated, tired, etc.) writing longer pieces here (more on that here) and wanted a general reset: little thoughts that don't aspire to be much more than that. Second, I wrote the creaky software that runs this entire site (blog + general area) in 2007 while on vacation, listening to the launch of the iPhone over a crappy stream. Thus it did not anticipate things like smartphones and tablets as ways to write things down quickly. Over the years I toyed with an Evernote "client," but it just never fit my workflow. On top of that, RSS died somewhere in there (a very sad thing) and Twitter replaced it as the way to push out updates (or maybe Twitter and the age-old practice of spamming people with email). In short, a lot has changed in eight years.

I'm not ready to shut this down, as I hope that screwing around on Bits gets me inspired to go back to the longer pieces. But if it is all crickets and tumbleweeds here for a while, Bits is where I will be.

Comments Permalink

(Note: Republishing this from a guest post over at TechCrunch)

If the past couple of years have had one theme for me investment-wise, it has been exploring the bridge between bits and atoms with a series of bets aimed at making a path between the digital world and the physical one we populate. And no, I am not talking about ordering a pizza from my smartphone or getting a maid-on-demand to come clean my house within the hour, but rather literally turning the bits on your screen into something you can touch, or, conversely, turning the very room you are reading these words in into a digital model you can inhabit.

I am referring to the worlds of 3D printing and virtual reality. Having made early investments in both the former (MakerBot, MarkForged) and the latter (Oculus VR), I am often asked when I decided that “hardware was hot” (short answer: it is not, and it will be one of the first investment categories decimated when the current cycle corrects). The question far fewer people ask is: what connects 3D printers, virtual reality, augmented reality, and drones into a possible investment thesis, and what might we expect to see emerge as important subcategories for investment over the next year?

Up until now, it’s been all tailwinds for the entrepreneurs and companies looking to make stuff happen in 3D. Recently, though, I’ve come to worry that the two tentpole categories (virtual reality and 3D printing) are about to enter the “trough of disillusionment,” as it is known in the Gartner Hype Cycle. This is an unfortunate but necessary period during which the gap between expectations for a new technology and what it can actually deliver today causes people to doubt. What follows is one way we may get through this trough a little faster.

First, a little context. This bi-directional bridge between bits and atoms matters because of the large multiplier effect I believe it will have on getting back to solving the world’s big, hairy problems with technology. Big as in energy, health, education, and infrastructure (the kind you drive cars on). As Neal Stephenson has written in the context of the decline of aspirational sci-fi, we’ve spent 20 years inventing, exploring, and wallowing in the Internet, and given that it is history’s most democratic source of information and channel for human collaboration, I’m certainly not going to complain. But much like the Flatlanders of the story Flatland, we’ve been stuck in a 2D world, our collective imagination for what computers can do crippled by the obsession with the next app or newly emergent social phenomenon (excluding, of course, a couple of million mechanical engineers and high-end game designers who need our collective help given the magnitude of the problems they can now tackle).

Just two examples that make the point (though there are probably a dozen really compelling ones):

  • Education and MOOCs: we’ve put the power of a global network to democratize the classroom into effect by offering postage-stamp-sized video rectangles for instruction and ETS-style multiple-choice questions for assessment. And yet we wonder why statistics courses are abandoned faster than JavaScript frameworks on GitHub. How about using some of the 3D immersive capabilities provided by the Oculus SDK to demonstrate the power of probability through simulation, or to model physical phenomena with calculus? The VR landscape is being nibbled at from the edges by folks looking to develop immersive training experiences, but as I’ll argue below, today’s tools leave the bulk of the most creative educators far from being able to move beyond the YouTube-ification of the classroom.

  • Mass customization: we are all unique snowflakes when it comes to the shape and size of our bodies, and yet we live at the bottom of the industrial machine’s funnel for everything from the shoes on our feet to the prosthetics we use to see, hear, and move in the world. There have been good economic reasons for mass production, but those economics are changing, and outside of truly bespoke prosthetics, we are only now scratching the surface of what is possible when you take distributed talent, insight, and experience and marry it to the means of low-volume manufacturing.

While there are “success stories” for both these types of companies gracing the pages of our favorite techno-utopian blogs, look beyond the headlines and you see a recurring pattern: “Professor from XYZ University and a team of slave grad students put together a barely workable demo to win more grant funding.” From this, the smell of the future does not emanate, even if William Gibson was correct in his claim that the future is already here, just not evenly distributed.

In order to move beyond demos, what this 3D world desperately needs is its HyperCard moment. And by that I mean a 3D authoring/simulation environment for the rest of us; one which ideally comes with as gentle a learning curve as HyperCard’s, and which treats the bits and the atoms as two ends of the same continuum, not two totally separate worlds.

Tinkercad is closest on the 3D printing side: a web-based CAD tool that almost anyone can master with a little patience, though it has been bumped around enough to be stripped of any product vision beyond aping CAD in the browser. (I do love how expertly both of my children can use it, so it has truly been a force for good in CAD.)

On the VR side, Unity and its associated ecosystem have done an admirable job of getting us out of the cycle-counting “hardcore” age of performant game design. But that is only because of the starting point. It is hard for me to see the schoolteacher from Peoria with a better way to teach fractions sitting down to make a VR experience when he or she would need about a year’s worth of domain-specific knowledge to get there. If the Internet has taught us anything, it is that good ideas come from anyone and anywhere, so the next Minecraft for the 3D world is unlikely to come from the “experts.”

And while I am on Minecraft, let me end there. As far as 3D environments go, Minecraft is the LOGO (if not the HyperCard) of this generation. Through its simple model of crafting blocks, millions of kids (and adults) have recreated extremely complex real-world environments. Minecrift, the VR mod, was in fact one of the first Oculus demos I ran, and this MIT project showed great promise in heading toward the virtual-physical bridge mentioned above. Teachers have taught classes entirely in Minecraft, both because it is easy to learn and because it is incredibly powerful in terms of what it can express. Add the fact that its new owner has also announced a fairly impressive VR/AR offering with Project HoloLens, and the possibilities become mind-bending.

Still, we can do more. What HyperCard did that was revolutionary was to engage not just gamers or musicians or even programmers, but all sorts of professionals, each of whom saw the elephant from a different vantage point. For some it was an everyday database; for others, a powerful multimedia tool for storytelling. In much the same way, the 3D creation/modeling environments of tomorrow will not be just game engines or CAD tools, but software environments where we can ingest, modify, and output the very atoms needed to get after the big problems technology will solve over the next 20 years.


This article on Cuban kids building an alternative Internet to connect with each other, given the straw-thin connection the country has to the real Internet, is straight out of science fiction. At $2K per node, 9,000 nodes are coming together to assemble one of the world's largest LANs right in Havana, and people are using it to connect, share, and generally communicate.

It's a good reminder for the rest of us that almost the entire layer cake of Internet protocols, from the MAC layer on up, can operate in a fully distributed and decentralized way. In an era when everyone uses the word "cloud" liberally to mean "centralized," whether for storage of critical data or for compute cycles, examples of practical decentralization (beyond the 7 TPS possible with Bitcoin) are worth contemplating. Not because there aren't great benefits to running compute centralized in a managed fabric that can scale indefinitely, but because there is an equal if not greater amount of power in knowing that the same technologies can be used independently of the big "stacks" looking to monetize eyeballs or device upgrades and ransoming your data back to you to that very end.

I'm not sure where decentralization goes as far as disruptive technologies that businesses can be built upon, but here are a few areas that I think are fertile enough to explore:

  • What does a decentralized sensor network look like in an era when we all carry at least a half dozen sensors in the smartphones in our pockets every day? Can a mesh of these sensors provide localized intelligence for making interesting decisions about things beyond commute traffic? Tornadoes in the Midwest of the US and barometric pressure come to mind, as do ambient noise and impending danger in urban areas.
  • How does connectivity decentralize if we presume no readily accessible carrier infrastructure? Can devices move to a store-and-forward model for data packets, waiting to be backhauled to the Internet via Wi-Fi? One can imagine all sorts of new applications emerging from the lack of a $5/10/20/40 per-device-per-month connectivity tithe, especially if married to suitably low-powered radios.
  • What does it mean to decentralize compute? We have already seen the limits of Moore's Law addressed with multiple cores on the same server, but if we split workloads among the cores embedded in our phones and other everyday devices, are there inherent advantages to pushing computation to the very edge? [I am way over my skis here] Could one imagine a series of deep neural networks whose inputs and outputs are made up of distributed processors that don't live in the same datacenter, and that therefore work not as one coordinated whole but as a bunch of distributed network elements in the real world? There might be no application gains from an architecture like this, but how will we know until we try?
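The store-and-forward idea in the second bullet can be sketched in a few lines of Python. This is a toy illustration only; the `MeshNode` class and `meet` method are invented names for the sake of the sketch, not any real mesh-networking API:

```python
from collections import deque

class MeshNode:
    """Toy store-and-forward node: hold packets until a peer or a
    backhaul link appears, then hand them off (illustrative only)."""

    def __init__(self, name, has_backhaul=False):
        self.name = name
        self.has_backhaul = has_backhaul   # e.g. Wi-Fi out to the Internet
        self.queue = deque()               # packets waiting to move on
        self.delivered = []                # packets that reached backhaul

    def originate(self, payload):
        self.queue.append(payload)

    def meet(self, peer):
        """When two nodes come into radio range, drain this node's queue:
        deliver if either side has backhaul, otherwise hand off to the peer."""
        while self.queue:
            packet = self.queue.popleft()
            if self.has_backhaul or peer.has_backhaul:
                sink = self if self.has_backhaul else peer
                sink.delivered.append(packet)
            else:
                peer.queue.append(packet)

# A phone with no carrier link hands its data to a passer-by,
# who later walks into Wi-Fi coverage:
phone = MeshNode("phone")
courier = MeshNode("courier")
gateway = MeshNode("cafe", has_backhaul=True)

phone.originate("sensor-reading-1")
phone.meet(courier)    # packet hops to the courier's queue
courier.meet(gateway)  # courier reaches Wi-Fi; packet is backhauled
print(gateway.delivered)  # ['sensor-reading-1']
```

Real delay-tolerant networks add expiry, deduplication, and routing heuristics on top of this basic hold-and-hand-off loop, but the core economics — no per-device carrier subscription, delivery whenever a backhaul link happens by — are captured even in a sketch this small.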

I'm not sure that decentralization works as a theme in IT; it seems we've spent the last 30 years trying to put the horse back into the mainframe barn in every domain except the one that renders the interface at the last mile (the GPU, the capacitive touch screen, the VR/AR glasses, etc.). But as our friends south of Key West remind us in the aforementioned article, it might be something worth exploring.


We are living in a multi-screen world, and I don't just mean the size differences from the watch to the iMac, all of them woven together through a layer of notifications that tell us when we're next expected to check Twitter or like something on Facebook.

No, we are living in a multi-screen world that is going to get especially interesting when innovative entrepreneurs start to think about how to weave the various pixel farms together to make us more productive, and potentially even to enable new modalities for collaborating, or even creating, in ways that belonged only on the pages of a science fiction novel when the Xerox PARC guys were birthing the interfaces we've all come to know and love over the last three decades.

I don't know exactly how this will shake out, but I'd bet on a lot of interesting multi-screen use cases going mainstream before I'd bet on AR/VR, or on the demise of any of the existing screens (yes, including the desktop). The lame use case is the "second screen" pushed by the cable companies with their smart set-top boxes, where your mom can call in the middle of the football game and you can pause/DVR it from your smartphone. Handoff/Continuity are better, but only slightly.

The really interesting stuff will come from new interface modalities where touch surfaces are combined with the precise work that a mouse and a huge display enable, or where the sensors on a smartphone inform the way content comes together as a group of people work collaboratively. There are so many science fiction use cases, most of which will probably die on the vine, that I'm not even going to enumerate them here. But they will definitely make things more interesting for creators and collaborators alike, and I for one cannot wait.


On the new iPads

Apple released its new iPads yesterday and they are totally boring, which is in and of itself not a huge deal. The bigger worry is that the new family of SKUs, covering every price and size from $250 to close to $1,000, reminds me of the kind of tasteless shelf stuffing that took place at HP, for both PCs and printers, during the years I was there.

Here is how it would go: some senior exec from one of the big channel partners (Costco, Staples, Best Buy, Tesco, etc.) would show up with a sales report or market study claiming that the price point between $399 and $499 seemed particularly fertile for some sort of compromised laptop, and BAM! a project manager would be assigned to sort out which components to take out of some existing device so that the BOM (bill of materials) would allow a product to exist, based entirely on rear-view-mirror data about purchases by the ants crawling through the shelves of the Costco late on a Friday night, somewhere between the cheeseballs and the lawn equipment.

Apple going this direction is no surprise given its lack of product leadership: adding small features to a rapidly exploding matrix of SKUs and managing product releases to Wall St. expectations. The bigger problem, though, is that the iPad third-party ecosystem has done so little to invent new experiences in the form of apps that drive the adoption of new and better devices. Almost every developer I talk to is much more excited about working on the iPhone platform than on the iPad platform, and it is a bit sad, because absent the iPhone stealing all of the thunder, the iPad would have been, in Alan Kay's words, "what the personal computer should have been."

Without new apps, the iPad will die a long, slow death of mediocre corporate decisions filling holes in the product matrix. As I write this in the new version of Drafts 4, with Prompt 2 and Pythonista being the only other apps that have gotten me excited in the last year, I'm not sure we will get there: an entirely new class of app targeting the large glass screen, the constant connection to the Internet, and MIPS that are much more about the GPU than the CPU. Apps like that would make buyers of the new iPad Air 2 feel like those early Apple ][ pioneers who bought the machine just to run VisiCalc, Star Blazer, and PrintShop.

If you've got one of those apps, I want to talk to you...
