
Wednesday, January 31, 2018

Connected Next Century - Wireless


No discussion of the future could be taken seriously without a discussion of wireless. Of all the things I am going to say in this entry, this is likely to be the least controversial.

Wireless is an area where much is going to change in a relatively short amount of time. And those changes are going to be reactions to consumer trends and new competition.

Wireless is a very broad subject, so for this treatment we are going to stick to Wi-Fi and cellular mobile architectures. Unless you are a member of the IEEE or 3GPP, they can seem complex or even intimidating. So let's break it down a bit, and start with the idea that there is benefit in understanding the how (we got here) before we can discuss the what (is likely to come). Wi-Fi has been around as a stable WLAN architecture for over 15 years, and mobile broadband has been tolerable since HSPA (3G) became widespread around 2010. But there is much still to be worked out, in both the licensed and unlicensed frequency space.

Honestly speaking, Wi-Fi still exists as the Wild West. Consumers are frustrated; they don't have an easy off-the-shelf solution, and they are looking for something that just works. The device (access point, or AP) market is dominated by the service provider (monthly lease) with their one-AP-fits-all model. The retail market has seen a recent batch of mesh solutions from old and new players, with their set-it-and-forget-it hardware. The demands of large MDUs (multi-dwelling, apartment-style buildings) are pushing the limits of congestion and collision, and every vendor and service provider is looking for that magic bullet that solves the physics issues. The number one type of service call for the service provider is still Wi-Fi and connectivity issues, and if the solution were just hardware, there would not be a problem anymore.

In response, most of the major service providers are looking at some form of Carrier Class Wi-Fi (CCW), emulating the core features of mobile (carrier) networks: multi-AP management, roaming, device management, authentication, SON (self-organizing networks), recovery, quality, and ultimately a packet core (combined data and voice on the same network). CCW is the idea that the right set of features and devices can efficiently deliver the Wi-Fi service providers need and the quality consumers want. But getting there, building and selling CCW as a differentiated service that meets the wide and varied needs of diverse installations, requires interoperability and increased investment. By contrast, much of today's commercially available Wi-Fi is offered as best effort with no expectation of quality of service.
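To make CCW less abstract, here is a minimal sketch of the simplest carrier-class trick: a controller that tracks client signal strength across multiple APs and recommends a roam, the seed of the roaming and SON features above. Every name and threshold in it is hypothetical, not any vendor's actual product.

```python
# Toy Carrier Class Wi-Fi controller: multi-AP awareness plus client steering.
# All names and thresholds here are hypothetical, for illustration only.
from dataclasses import dataclass, field

STEER_MARGIN_DB = 8  # only steer if another AP is meaningfully better

@dataclass
class AccessPoint:
    name: str
    # client id -> last reported RSSI in dBm (-55 is good, -80 is poor)
    rssi: dict = field(default_factory=dict)

class Controller:
    def __init__(self, aps):
        self.aps = aps

    def report(self, ap_name, client, rssi_dbm):
        """APs push per-client RSSI measurements to the controller."""
        for ap in self.aps:
            if ap.name == ap_name:
                ap.rssi[client] = rssi_dbm

    def steer_decision(self, client, current_ap):
        """Return the AP the client should roam to, if any (802.11k/v style)."""
        candidates = [(ap.rssi[client], ap.name) for ap in self.aps
                      if client in ap.rssi]
        if not candidates:
            return None
        best_rssi, best_ap = max(candidates)
        by_name = {name: r for r, name in candidates}
        current = by_name.get(current_ap, -100)
        if best_ap != current_ap and best_rssi - current >= STEER_MARGIN_DB:
            return best_ap  # roaming recommendation
        return None

aps = [AccessPoint("living-room"), AccessPoint("garage")]
ctl = Controller(aps)
ctl.report("living-room", "phone", -78)
ctl.report("garage", "phone", -52)
print(ctl.steer_decision("phone", "living-room"))  # -> garage
```

The single-AP-fits-all model has no component that can make this decision at all; that coordination layer is the whole point of CCW.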

On a parallel technological track, the mobile network operators (MNOs) have been developing 3GPP-based solutions under the family names of 3G and 4G (LTE), and are now working on 5G. Consumers have been able to take advantage of bigger and bigger pipes of faster throughput for their mobile devices. As the MNOs' technology upgrades approach the next generation (5G), delivering increased coverage and capacity, they are in a unique position to disrupt residential broadband. This, however, is predicated on their adapting billing models to favor unlimited plans, or at least generous usage caps, to sustain what customers are expecting. What they are building is new competition in the consumer broadband market: fixed wireless broadband (FWB). This is more than just Wi-Fi vs. LTE in the home; it is the enabling of future consumption anywhere and anytime.

As with all good ideas, 5G places the industry at something of a crossroads: 4G and LTE-A (an evolution of LTE) have many years of life left to deliver value for the existing mobile device market, but there is growing acknowledgement that the capacity limitations within LTE mean that delivering broadband at scale is going to be a challenge. 5G is therefore becoming widely recognized as an evolutionary step toward a new market of fixed wireless broadband. The future of FWB is becoming clearer by the quarter, and while FWB is a threat to the DSL and PON (CPE) vendors, the market is huge, and the competition from incumbents and newcomers is going to be fierce. The telco, the cable operator, and even the mobile provider are all targeting the same consumer, who today is probably a customer of at least two of these respective companies. How we navigate these waters, churned by the biggest of boats spending billions to upgrade and compete to be the sole service provider, starts with understanding 5G and fixed wireless broadband, and matures as we build new 5G services and skill sets to pick up the challenge.

Benefits of OFDMA - Virtual Network Slicing
We should come back to the present now and discuss the future of wireless. In 2018/19 we are going to see major SoC releases from chip OEMs for both technologies: 802.11ax in Wi-Fi, and New Radio (NR) in 5G. What is transformative about both of these technologies is their similarity: considerable improvements in spectral efficiency enabling new high-density deployments. Both introduce enhancements to MIMO and MU-MIMO, and both introduce OFDMA at scale. I don't see this as another WiMAX vs. LTE competition, primarily because I see AX and NR as complementary in their use cases. AX is not likely to become an access (last mile) solution, and NR is likely to be constrained to frequencies that won't penetrate external walls for the foreseeable future. In other words, combined they create a better consumer wireless solution. Not coincidentally, we know that the IEEE and 3GPP have been working together to align Wi-Fi with cellular. The future of wireless is about slicing and virtualizing the network to communicate with many different devices at the same time (IoT, mobile, streaming TV), and OFDMA is the key to making this happen for both Wi-Fi and NR.
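To see why OFDMA is the key, here is a toy scheduler assuming a 20 MHz 802.11ax channel. The resource unit (RU) sizes are the real ones the spec defines at that width, but the packing policy and the capacity figure are invented, and real 11ax additionally constrains where each RU may sit in the channel.

```python
import math

# Toy OFDMA scheduler: pack per-device queues into 802.11ax resource units.
# RU sizes are the real ones for a 20 MHz channel; BYTES_PER_TONE is a
# made-up round number standing in for per-transmission capacity.
RU_SIZES = [26, 52, 106, 242]   # tones; 242 tones == the whole 20 MHz channel
TOTAL_TONES = 242
BYTES_PER_TONE = 20             # hypothetical capacity per transmission

def schedule(demands):
    """demands: {device: queued_bytes} -> {device: granted_ru_tones}."""
    grants, remaining = {}, TOTAL_TONES
    for device, queued in sorted(demands.items(), key=lambda d: -d[1]):
        need = math.ceil(queued / BYTES_PER_TONE)
        fitting = [ru for ru in RU_SIZES if ru <= remaining]
        if not fitting:
            break  # channel fully packed for this transmit opportunity
        # smallest RU that covers the queue, else the largest that still fits
        ru = min((r for r in fitting if r >= need), default=max(fitting))
        grants[device] = ru
        remaining -= ru
    return grants

# With plain OFDM, each device needed its own airtime slot; here five devices,
# four of them tiny IoT flows, share a single trigger-based transmission.
print(schedule({"tv": 2000, "phone": 900, "thermostat": 60,
                "doorbell": 80, "sensor": 20}))
# -> {'tv': 106, 'phone': 52, 'doorbell': 26, 'thermostat': 26, 'sensor': 26}
```

That packing, many devices of wildly different appetites served in one transmission, is exactly the property that makes network slicing practical on both AX and NR.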


Now for the controversial part. If Wi-Fi is the home turf of the service provider, with their fixed wired broadband outside the home and Wi-Fi inside, and mobile is creating new fixed wireless broadband outside the home and a combination of Wi-Fi and LTE inside, what is the future? This wired vs. wireless competition is about quality vs. class of service. Consumers have never felt loyalty to their broadband provider; they just want the service to work and be available anywhere. That's a challenge as a cable customer when you're at the gym and want to stream the latest Netflix episode. Your Wi-Fi only works at home, your mobile account has a cap of a few gigabytes each month, and the gym Wi-Fi is terrible or nonexistent. Some carriers do offer free Netflix, but that doesn't help with Prime or Hulu or the next streaming service. As NR becomes ambient, a carrier with a class of service of unlimited streaming inside and outside the home is going to attract cable customers away to a single-provider bill. Cable knows this and is now investing heavily in more public Wi-Fi, more MVNO agreements, and possibly more consolidations or mergers.

Monday, January 29, 2018

Connected Next Century - Edge

As technology reminds us, not all trends follow the same path. The press likes to announce that public cloud is dominating compute and storage, and that the remaining holdout platforms continue to transition away from on-premises in favor of these same public clouds. But not all compute exists as SaaS or PaaS running on IaaS, not by a long shot. Today (2018) over 2 billion people worldwide have a smartphone. That means roughly 2 in 7 people on the planet carry a phone with significant compute and storage, and being a smartphone means people are more likely than not to store media and content that is not necessarily in the cloud. This percentage is expected to grow at a rate that parallels the increases in smartphone SoC innovation. Smartphone buyers are not typically one-and-done buyers. This explosive growth in consumer electronics in our pockets is an untapped form of cloud compute, one that is going to change how technologists think about the edge.

As compute power has evolved, so has the need for high-speed broadband to meet changing consumer expectations; to borrow the pop-culture references, "if you build it, they will come," but also "when do we want it? Now!" New consumer demands like OTT video and IoT are increasing throughput demands on existing networks, but new devices are generating consumer-electronics ripples that are likely to rewrite current experiences. A new type of content consumption is emerging, one that can be traced back to 3D movies and video and, more recently, gaming and consoles. Multiple industries that previously had little interaction are now co-developing and investing billions into the next breakout device. What has been called the head-mounted display, or HMD, promises to revolutionize the world, not just entertainment. These new immersive devices are limited more by our imagination than by current technological capabilities. An entirely new science for creating VR and AR content, physiological adaptations, new hardware, haptics, new screens, and cheap wireless are creating this new market. In order to both maximize and monetize the experience, these new technologies are blazing a trail. By immersing in the content instead of just watching it, the traditional idea of visual effects is undergoing an adaptation to a new virtual and augmented world.

The primary limiting factor today preventing meaningful virtualization in the VFX world is latency. The laws of physics tell us data cannot travel faster than the speed of light between the core and the edge, therefore something else needs to give. One dominant idea is that compute will split: the heavy image processing stays at the core, while the rendering of images moves to the edge. Whatever form the future CPE takes, whether a wearable, optics, a 2D wall of light, or even an embedded implant, its compute will need to increase to offset latency. By separating the compute-intensive processing from the final image rendering, the data can be stripped down to a lightweight metadata stream of parts, capable of representing the images in a way that can be assembled (rendered) independent of the bulky source, at the edge (wherever the consumer is at that moment). This frees the artist to move from the 2D desktop to the XYZ axes of the workspace.
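The physics is easy to check with a back-of-envelope calculation: light in fiber covers roughly 200 km per millisecond, and a commonly cited VR motion-to-photon comfort budget is around 20 ms, so a distant core eats the budget before a single pixel is processed.

```python
# Back-of-envelope: why rendering must move toward the edge.
# Light in fiber travels at roughly 2/3 c, about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0
MOTION_TO_PHOTON_BUDGET_MS = 20.0   # commonly cited VR comfort target

def min_rtt_ms(distance_km):
    """Physics floor for a round trip; real networks add queuing on top."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (10, 100, 1000, 4000):
    rtt = min_rtt_ms(km)
    left = MOTION_TO_PHOTON_BUDGET_MS - rtt
    print(f"core at {km:>4} km: RTT >= {rtt:5.1f} ms, "
          f"{max(left, 0):4.1f} ms left for processing and render")
```

At 1,000 km the round trip alone burns half the budget; at 4,000 km there is nothing left. No protocol cleverness fixes that, which is why the render step has to live near the consumer.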

Whether this is watching theatrical, episodic, or user-generated content, or gaming, the traditional television is becoming obsolete. The debate at the moment is what that edge compute device is: the smartphone, the HMD, a console, an STB, or something new? What is likely to be true regardless is that edge compute is going to take lessons, technology, and even architecture from the cloud. Today we are seeing containers, a traditional cloud and infrastructure technology, showing up in CPE middleware (OS), including Amazon's Greengrass (which runs Lambda functions at the edge) and LXC from the Linux world.
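As a hedged illustration of that pattern, here is a minimal sketch of the kind of Python function Greengrass deploys and runs at the edge. The handler shape and the iot-data client are the documented Greengrass v1 pattern, but the topic, fields, and threshold are hypothetical.

```python
# Minimal sketch of an edge Lambda of the kind Greengrass runs on a gateway
# or CPE-class device. The topic name and payload fields are hypothetical.
import json
import greengrasssdk  # SDK available inside the Greengrass core runtime

client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    """Invoked locally on the device, e.g. by a sensor message. Only the
    small filtered result travels upstream; the raw data stays at the edge."""
    reading = event.get("temperature_c")
    if reading is not None and reading > 80:   # hypothetical alert threshold
        client.publish(
            topic="home/alerts",
            payload=json.dumps({"alert": "overheat", "temperature_c": reading}),
        )
    # returning nothing: these handlers are effectively fire-and-forget
```

The interesting part is architectural: the same Lambda programming model runs in the cloud and on the CPE, which is exactly the "edge borrows from cloud" lesson above.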

Tuesday, January 23, 2018

Connected Next Century - Business Lessons

New AEs (autonomous entities) are created every day in the form of applications that resemble building blocks, with connectors that allow interactions and exchange of information that make the sum of the blocks greater than the parts. Each block represents a service that, when chained together, creates a system that can accomplish anything a creative mind can dream up. The interfaces, the connectors, the APIs are the future of compute and the backbone of the service catalog. By stringing new chains of services together, companies are building the future DNA of markets and industries, systems that will enable and empower machines to learn and adapt. This brave new world represents the growing, overhyped term of the platform, but it is how the online future will be delivered. Knowing how to link a UI/UX to a set of algorithms formed into an application, to virtual compute, database, and storage, and to do it repeatedly, is the new digital norm. Often the end user doesn't understand the intricacies of the components, but they do know when those components are not working or behaving as expected. The list of successful platforms, when examined, looks like the list of startup unicorns.
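As a sketch of the building-block idea: each block below is a tiny service with the same connector shape (a dict in, a dict out), so blocks can be rechained at will. The services themselves are invented placeholders, not any real platform's components.

```python
# Building blocks with uniform connectors: each service takes and returns a
# dict, so blocks can be chained in any order. All blocks are placeholders.
from functools import reduce

def authenticate(req):
    req["user"] = req.get("token", "anon").removeprefix("tok-")
    return req

def fetch_profile(req):
    req["profile"] = {"user": req["user"], "tier": "gold"}  # stub lookup
    return req

def render_ui(req):
    req["page"] = f"Hello {req['profile']['user']} ({req['profile']['tier']})"
    return req

def chain(*services):
    """The connector: compose blocks into one platform pipeline."""
    return lambda req: reduce(lambda acc, svc: svc(acc), services, req)

platform = chain(authenticate, fetch_profile, render_ui)
print(platform({"token": "tok-ada"})["page"])   # Hello ada (gold)
```

Swap any block, or insert a new one mid-chain, and the rest of the pipeline neither knows nor cares; that indifference is what makes the sum greater than the parts.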

To take an idea, to grow and expand it, to survive the hype curve and reach scale and profitability takes patience, vision, and smart people who can see beyond the pitfalls and learn from the missteps. But this is not the real lesson; there are many companies who have shown they can reach productivity, and people who can repeat the exercise over and over again. The lesson is that success starts from an idea, a service, an innovation. Too often it is the successful company that, as a sophomore, tries to repeat by jumping ahead of the steps, whether out of hubris, expediency, or economics, and these attempts fail. Gall's Law teaches that "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system." No one gets to skip steps when bringing a new innovation or disruption to the plateau of productivity. This is where unicorns are bred: ideas are repeated over and over and over again until they are right; not perfected, but right.


Connected Next Century - Gravity


Data Gravity is already here


Let's start with a question: when you hear the words "it's virtual," do you envision the subject as unknown and potentially scary, or as enticing, so that you can't wait to hear more? This question is as much philosophical as it is existential. You may be asking what this has to do with gravity and the future of compute.

Basic primer - In the early days of compute, deployments or installations were physical by nature, purpose-built or bespoke in both hardware and software, and there was a one-to-one relationship between machine and application. It was a long way from the ubiquitous term of "service" we use today. New applications and infrastructure took weeks or months to roll out, with lots of signatures and business-model justifications. I, for one, am glad that is the past. A service, as we use the term today, is the idea that a user experience, say a mobile phone app, or a web site, or even a smart TV, is enabled or powered by a number of services that are glued together (integrated) via a superset of software controls (a platform) running on a farm of machines, often many miles and milliseconds away.

Over the last half decade, we have seen significant changes in architectures and in software stack efficiencies, motivated by costs and, more and more, by environmental stewardship. Public and private cloud providers have invested billions in demonstrating to, and convincing, the legacy generation of developers that hardware and applications are not inseparably connected. With this philosophical and mental adoption, the idea that the infrastructure is disconnected from the application, we find the new relationship bounded only by imagination. Decoupling the hardware from the application means that developers no longer have to worry about maximizing hardware resources and scale, and can instead focus on more important matters like performance, security, and enhancing the network communication language: application programming interfaces (APIs). This idea of modularity, many microservices, each designed to do one thing and do it well, all running alongside each other as peers or dependents in an environment, is often called a service-oriented architecture (SOA). The power of an SOA means that the platform and its services can run anywhere, and more and more this is being outsourced to an infrastructure (the physical bits) that is owned, managed, and maintained by a third-party company. This complex but beautifully simple relationship is called "The Cloud." The magic of the cloud is not that it's hosted or managed, but its ability to transform the way software and creative minds work.
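A minimal sketch of "one thing, done well": a single microservice whose only contract is its API, using Flask purely as a stand-in for whatever framework a real stack would choose. The route and payload are invented for illustration.

```python
# One microservice, one job: summarize text. Nothing here knows or cares what
# hardware it runs on; peers reach it only through its API. Names invented.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/v1/summary")
def summarize():
    text = request.get_json(force=True).get("text", "")
    # the one thing this service does well (trivially, for the sketch)
    summary = text[:80] + ("..." if len(text) > 80 else "")
    return jsonify({"summary": summary, "chars_in": len(text)})

if __name__ == "__main__":
    app.run(port=8080)  # any host, container, or cloud; the API is the contract
```

Because the contract is the HTTP interface and nothing else, the same service can move from a laptop to a container to a third-party cloud without its peers noticing, which is the decoupling the paragraph describes.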

In the future, open source gives way to open systems and component ecologies. Autonomous entities (AEs), sometimes hardware, sometimes software, become the empowering tools, with security controls and policies that wrap insecurities within small zones of access. Compromising one AE will not allow transitive access to its neighbor. Attacking or passing nefarious or even erroneous information between AEs will result in either isolation or denial. Old-school network practices like edge firewalls, VPNs, and air gaps are not the future of security, and reliance on them will become the anchor that impedes innovation. The baseline access method today is through a virtual network layer or virtual private cloud (VPC). The future of the VPC is the wildfire spread of pervasive computing that enables speed, performance, and interaction, creating a data gravity well where data becomes the immovable object and stops chasing the sun, and the talent becomes nomadic. Public, private, hybrid: these are just names used to comfort the wary, and they hinder the truly ubiquitous capabilities of speed and performance. The virtualization of the desktop is already upending the paradigm, allowing data and compute to remain secure, with latency as the remaining oppressor. New business models that move desktop applications and creativity tools to the "as a service" model, charged on a time-of-use basis, are driving the fixed-seat model extinct.

Discrete object-level security in the cloud is advancing, as metadata and blockchain are the future of security in the cloud. Each and every data object should contain, or cryptographically conform to, a security standard that checks and authenticates access against an unlimited, append-only history record, updated continuously in real time as each object is touched. Access is granted based on source, credential, syntax, context, and pre-determined policies. Everything is logged, and anything else raises an alarm. Audit logs and chains of custody will provide instantaneous alerting through logging and behavioral APIs. Machine learning and newly evolving AI algorithms are tracking inbound and outbound data, exposing the unknown to the light.
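Here is a hedged sketch of what that blockchain-flavored history could look like: a hash-chained, append-only access ledger where each record seals the one before it, so tampering anywhere breaks every later hash. The field names and object identifiers are invented.

```python
# Tamper-evident access history: each record's hash covers the previous
# record's hash, blockchain-style. Field names are invented for the sketch.
import hashlib, json, time

class AccessLedger:
    def __init__(self):
        self.records = [{"event": "genesis", "hash": "0" * 64}]

    def log(self, obj_id, who, action, granted):
        rec = {"object": obj_id, "who": who, "action": action,
               "granted": granted, "ts": time.time(),
               "prev": self.records[-1]["hash"]}
        rec["hash"] = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode()).hexdigest()
        self.records.append(rec)

    def verify(self):
        """Recompute every hash; any edit to history breaks the chain."""
        for prev, rec in zip(self.records, self.records[1:]):
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev"] != prev["hash"] or rec["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
        return True

ledger = AccessLedger()
ledger.log("asset/42", "ada", "read", granted=True)
ledger.log("asset/42", "mallory", "write", granted=False)  # logged and alarmed
print(ledger.verify())                    # True
ledger.records[1]["who"] = "eve"          # tamper with history...
print(ledger.verify())                    # False: the chain exposes the edit
```

The point is not the toy crypto but the property: the history record authenticates itself, so access decisions can trust it without trusting the machine it happens to sit on.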

In a world where AE security is known and trusted, the potential is created for collaboration and creativity by bringing the application to the data, where actions are faster, cheaper, and greener. As cloud services adoption increases because of price, flexibility, and popularity, data gravity will continue to create a data singularity surrounded by equally massive layers of compute. Already underway, such centralization is causing a tectonic shift in how and where creative and talented people live and work. Companies will be pulled into the gravitational field of the nucleus to compete, and will remain there because of the talent and the way innovation is evolving.