I was recently put in contact with Greg Raleigh, the scientist and serial entrepreneur who invented and commercialized the MIMO (multiple input/multiple output) technology used in 4G wireless networking. Raleigh has been in the scientific field for over 30 years, holds engineering degrees from California Polytechnic State University and Stanford, and earned a Ph.D. from Stanford. He’s had his hands immersed in the concept of MIMO for almost 20 years and was named by Network World as one of “the 50 most powerful people in networking.” He currently presides over ItsOn, a company he co-founded, as CEO and Chairman. Here’s a transcript of our conversation about mobile networking.
Scott Matteson: Can you tell us about your work with 4G?
Greg Raleigh: “4G is based upon MIMO – multiple input, multiple output. It uses multiple antennas at both the input and output of a wireless connection. I discovered the concepts in the mid-1990s. Nobody believed it would work at first. I first rigorously derived a precise mathematical theory to prove it would work, and then moved directly into designing the signal processing, coding and wireless electronic structures that would allow us to realize the first commercial MIMO system. Once the theory was proven, I researched the advantages and disadvantages of the various possible MIMO modulation and coding approaches and concluded that FFT-based MIMO-OFDM with multi-dimensional signal encoding would deliver the highest performance at the lowest cost. That work placed MIMO-OFDM on a course for adoption in most new wireless communication standards that would follow, including the now pervasive standards of 4G LTE for cellular and 802.11n and 802.11ac for Wi-Fi.
“The essence of MIMO is that it uses multi-dimensional coding across multiple transmit antennas, and multiple receive antennas to recover the multi-dimensional signals, in a way that exploits multipath to multiply the speed and improve the reliability of a wireless communication link without increasing the required amount of frequency bandwidth. The way it works is very deep mathematically, but in a nutshell the everyday explanation is this: in real-world applications of our wireless phones and other devices, a signal doesn’t go straight from a transmitting antenna to a receiving antenna. It bounces off things. If you think about it, you can never see the base station when you’re using your cell phone. The base station is hidden behind buildings and hillsides and trees and all sorts of things. The signal you receive is actually a reflected signal – not from one object but from hundreds or thousands of objects, each creating a unique signal path from transmitter to receiver. For 100 years those signal reflections caused problems for conventional types of radios, so scientists and engineers tried to ‘fix’ what nature does to the wireless link by ‘mitigating’ or ‘getting rid of’ the effects of multipath. The best way to think about why MIMO was so different is this – if you have enough antennas at the input and output of the radio link, then you can use every single reflection path to create a new communication link that carries more information, or bits, over the same frequency channel. So, if you have 1,000 reflection paths, 1,000 transmit antennas and 1,000 receive antennas, you can communicate 1,000 times the information – or 1,000 times the speed, in other words. This turned a century of radio science upside down.
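The speed-multiplying effect Raleigh describes can be illustrated with a toy calculation (my own sketch, not from the interview) using the idealized capacity of n equal-gain parallel spatial streams, with the total transmit power split evenly among them:

```python
import math

def mimo_capacity_bps_per_hz(n_streams: int, snr_linear: float) -> float:
    """Idealized capacity of n equal-gain parallel spatial streams.

    Total transmit power is split evenly across streams, so each stream
    sees snr_linear / n_streams; the streams' capacities then add.
    """
    return n_streams * math.log2(1 + snr_linear / n_streams)

snr = 10 ** (20 / 10)  # 20 dB signal-to-noise ratio
for n in (1, 2, 4, 8):
    print(f"{n} stream(s): {mimo_capacity_bps_per_hz(n, snr):.1f} bits/s/Hz")
```

Under these idealized assumptions, capacity grows roughly linearly with the number of streams over the same frequency channel, which is the sense in which MIMO "turned radio science upside down": given enough antennas, more reflection paths become more capacity rather than more interference.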
“Success has many fathers; many people will say they somehow had something to do with 4G, but I am the sole inventor of the core concepts, theories and practical electronic solutions behind 4G, as documented in the first patents and academic papers that are all available in the public record. I started calling it 4G in 1996 and got criticized, because they said, ‘We’re barely getting going with 3G and you’re already talking about 4G.’ I said, ‘That’s fine, because 3G is not going to provide the speed and reliability to deliver video and music and all the things we want to do with cellular and Wi-Fi wireless,’ and pointed out that the 3G network would clog up as soon as people used it for such applications. It was clear we were going to need something ten times faster, and that was going to require MIMO.
“After completing my Ph.D. research, papers and oral defense, I built my first company – Clarity Wireless – along with some friends. We developed prototypes and the core technology underpinning a MIMO-OFDM radio, and built a radio system with much higher performance in the presence of multipath than anyone had achieved until that point. Clarity was acquired by Cisco, where we developed a commercial product line for multi-antenna OFDM wireless networking – both point-to-point and point-to-multipoint systems with very high data rates.
“After working with Cisco for several years we started a second company, Airgo Networks, with the goal of making Wi-Fi so fast and so reliable that we could go completely wireless for homes and offices – which was unheard of at the time. We first attempted to convince the Wi-Fi industry to adopt a MIMO-OFDM system we designed as a new interoperability standard. By this time the idea that MIMO could in theory multiply speed in the presence of multipath was generally accepted, but no one believed that there would be ‘enough multipath’ in real-world settings for MIMO to achieve many times the speed of conventional radios. Also, no one believed it was possible to make inexpensive MIMO chipsets for consumer products. So they rejected our MIMO standards suggestion. We just went ahead and developed pre-standard MIMO-OFDM chipsets that were very reasonably priced. They were interoperable with standard 802.11a/b/g chipsets and improved performance if our chips were on one side of the link, and when our chips were on both sides of the link they implemented the world’s first mass-market ‘True MIMO’ technology. Most of the largest Wi-Fi consumer equipment manufacturers started buying our chips and making a new generation of Wi-Fi consumer products that performed over 10 times faster with 10 times the coverage in real-world third-party testing. This resulted in rave reviews for our customers’ products, and they began taking market share. This is when the industry decided it was a good idea after all to create a MIMO-OFDM Wi-Fi standard, so our technology was adopted in the 802.11n Wi-Fi interoperability standard.
“Once we proved that MIMO-OFDM could be implemented in very high-performance and low-cost chipsets, this very quickly led to the adoption of the same type of MIMO-OFDM technology in all the 4G standards that were being worked on at the time, including 4G LTE, 4G WiMAX and others. Qualcomm acquired Airgo, and at Qualcomm the same Airgo team was instrumental in creating the 802.11ac standard, which is an extension of MIMO-OFDM to higher bandwidths. The team also provided the core MIMO signal processing algorithms that make Qualcomm’s LTE chipset the highest performing in the world to this day.
“So, that’s how 4G came about. Probably the most interesting aspect of how MIMO was invented is the fact that rather than doing what everyone else had done for a century – trying to use conventional wireless technology concepts to ‘fix’ or ‘mitigate’ multipath – I asked a fundamentally different question: ‘What is the right wireless technology to use in a wireless link with multipath if you have multiple antennas at the input and the output of the link?’ Asking the question this way forced me to derive a new mathematical theory that led to the discovery of an entirely new class of wireless signal processing techniques – techniques that make our wireless devices 10 times better today.”
Scott Matteson: So in terms of 4G and 5G comparisons, is 5G the next evolutionary logical step or a different paradigm entirely?
Greg Raleigh: “5G is, frankly, just a marketing acronym that doesn’t mean any one thing today. It’s a mix and match of different wireless ideas and some marketing spin. That’s not to say that some of the ideas are not valuable – in fact some of the ideas will be implemented and will provide a great deal of improvement in wireless network speed. However, if you talk to ten different individuals, ten different companies, or ten different mobile operators, you’ll get ten different answers to the question ‘Exactly what is 5G?’ 4G is very specific and was based on my invention of MIMO to exploit multipath. It’s not just multiple antennas at the transmitter and the receiver; it employs multi-dimensional signal processing to take advantage of multipath – that completely defines what 4G is. 5G is a mix and match – more MIMO, and variations on the MIMO that was employed in 4G. It’s small cells, fiber backhaul – really a lot of incremental improvements on the radio infrastructure. Many of these things will make a big difference. Small cells are most likely the main source of speed improvement. The other idea that has very important implications but will take longer to implement is using small cells – small base stations – in a coordinated way to enhance MIMO. MIMO doesn’t have to be at one base station to receive and transmit to one handset; you can also use MIMO in the sense of multiple antennas at multiple base stations all participating in the same radio link with a multiple-antenna handset. That’s called distributed MIMO. I wrote patents on that in 1994, 1995 and 1996, and it’s just started to come into play as the microprocessors and chipsets that do the digital signal processing are becoming fast and powerful enough to bring all the digital signals back to a central processing point over fiber so you can operate on multiple signals at the same time.
“The two biggest things will be small cells and distributed MIMO, and then you need fiber to the small cells to make those things work. Those are the three things that will contribute to further advancements in the speed and reliability of urban networks – small cells with fiber backhaul at the corners of buildings and the tops of light poles, and then distributed MIMO. You can call it 5G if you want, but those are just the inevitable technology evolutions that will occur in the cellular network. I think ‘5G’ is overused, misused and abused.”
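Raleigh's distributed-MIMO point can be sketched with a toy calculation (my own illustration, not from the interview; the antenna counts are hypothetical): a single MIMO link carries at most as many parallel spatial streams as the smaller antenna count at either end, so coordinating several base stations into one larger array lets the network carry more simultaneous streams than any one cell could.

```python
def max_spatial_streams(tx_antennas: int, rx_antennas: int) -> int:
    # A MIMO link carries at most min(tx, rx) parallel spatial streams,
    # assuming rich multipath supplies enough independent reflection paths.
    return min(tx_antennas, rx_antennas)

# One 4-antenna base station serving one 2-antenna handset:
single_cell = max_spatial_streams(4, 2)  # limited to 2 streams by the handset

# Distributed MIMO: three coordinated 4-antenna base stations behave like
# one 12-antenna array, so they can serve three 2-antenna handsets at once,
# each handset getting its own pair of streams:
network_streams = max_spatial_streams(3 * 4, 3 * 2)  # 6 streams network-wide

print(single_cell, network_streams)
```

This is why coordination requires hauling the digitized signals back to a central processing point over fiber: the streams can only be separated if one processor sees all the coordinated antennas as a single array.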
Scott Matteson: Where are carriers right now in terms of 5G? How close are they to coming up with something workable?
Greg Raleigh: “These things that they’re doing don’t need a ‘5G’ standard, per se. If you get into distributed MIMO you may need a standard, but you don’t need one for fiber backhaul to small cells, which they’re already doing. There are urban center markets in the U.S. that have hundreds of small cells already. As that technology is developed and perfected on a large scale, it’s going to give you ten times the capacity in urban centers. Verizon, AT&T and others are doing this, and there are mobile operators doing it in Europe. You don’t need anything you don’t already have today to do that; it’s just a matter of mobile operators rolling up their sleeves and deploying the capacity. If you put a 4G LTE base station on every street corner you’re going to blow away what you can do with Wi-Fi. You’ve got more capacity than Wi-Fi in a controlled spectrum. The difference between Wi-Fi and LTE is that LTE runs in licensed spectrum, and the mobile operator can control the service quality because they own the spectrum. No one is allowed to interfere. If you try to put up these metro Wi-Fi cells, what ends up happening is the coffee shop puts up an access point right next to where the mobile operator or city puts one up, and they interfere with one another – and the more Wi-Fi is deployed by more parties, the worse the performance becomes. For this reason Wi-Fi just isn’t as effective as LTE at providing high-quality services for things like voice and video communication.
“So, small cell LTE is an absolute no-brainer. There’s nothing stopping anyone from doing it now – you don’t need 5G or anything else. They need to reduce the cost of the small LTE base stations, but they’re already reasonable. In fact, although it’s at a little different level of reliability, you can get an LTE ‘femtocell’ for your home for a couple of hundred bucks – and some mobile operators will give you one so they can save the macro cell (main network) capacity you consume when you are at home. It’s a little more money than a Wi-Fi access point, but femtocells haven’t achieved the same volume scale yet, so when they do, the price will eventually come down to maybe a little more than a Wi-Fi access point.
“When you look at the economics of a small cell deployment, it’s not the cost of the LTE base station that drives the mobile operator’s network costs – it’s acquiring lease space on poles and building corners, laying the fiber, and then maintaining the network. The inevitable 4G small cell solution is already obvious. Some say that it will be Wi-Fi instead of 4G, and surprisingly some operators are playing with this. But most of the costs – leasing the base station space, laying fiber and maintaining the base stations – are the same whether the operator uses a Wi-Fi access point or a small cell or femtocell LTE base station. The carrier owns the LTE spectrum and can control the quality, so why in the world would you ever do Wi-Fi? It would be silly to spend money on a lot of Wi-Fi infrastructure when you can spend the same amount on infrastructure where you control the spectrum quality, and therefore the service quality, for network users.”
Scott Matteson: It looks to me like the trend here is that LTE may entirely replace Wi-Fi at some point, or at least it should?
Greg Raleigh: “You will clearly still use Wi-Fi for home and office. Wi-Fi will also still be used by any entity that doesn’t own spectrum – corporations, universities, municipalities, etc. Those are private networks. Mobile operator networks will still use some Wi-Fi and Wi-Fi assets from companies like Boingo and so on while they’re working on small cell deployments, and Wi-Fi will allow them to cover devices that have Wi-Fi modems but not 4G LTE modems, but ultimately it’s an inevitable conclusion that it won’t cost them more to deploy LTE. They control the quality of the spectrum and don’t have to worry about someone putting up a private Wi-Fi cell that destroys their coverage quality next to one they pay a lot of money to maintain. They also don’t have to worry about a competitor trying to use Wi-Fi that harms the quality of their network.”
Scott Matteson: What sort of dates do you anticipate are involved here?
Greg Raleigh: “I’m actually surprised small cell LTE is not going faster – I think it just comes down to how quickly the operators need new capacity and how quickly they work through the decision distraction surrounding the various options. I think that in most cities you get quite good LTE capacity from multiple carriers now. In the U.S. you’ve got four options for good 4G LTE coverage in most cities. The combination of MIMO technology and reduced cell size has already increased capacity tenfold over where we were six or seven years ago. People watching videos during the busy hours are not having too much trouble with network congestion.
“As more and more devices are added, as people do more bandwidth-intensive things on their devices, as app developers and content providers move to HD rather than SD streaming, and as the Internet of Things takes off, eventually the current network base station density will run out of capacity again. The way out of that is to bring down cell size, and that’s what will happen. We looked at this roughly three or four years ago and did a very careful analysis, since we were considering being venture angels with some friends to help start another company to advance MIMO, but frankly we decided not to do it because it won’t be necessary for a long time. We know all sorts of things we can do to improve the capacity of the small base station with uber-MIMO techniques, but we came to the conclusion that reducing cell size will allow the mobile operators to go forward for perhaps five to ten more years using existing base station technology, driving down the cost of the base station by driving up the scale in their fiber networks. There are some things to work through – the handoffs between cells need to occur more quickly, such as when you’re driving – but these are fine-point technicalities that are a lot easier than getting the world to adopt a new standard. Once the operators exhaust the bandwidth available with small cell 4G LTE, there are things you can do – the cells can do coordinated MIMO, for example – so there is room for another radio standard, but it’s not actually needed for quite some time. We decided it was a little before its time to invest in new radio techniques.
“It’s different for people working on metro-area Wi-Fi and so on – such as in a municipality. You don’t own the spectrum, so if you want to give something away for free you can put up Wi-Fi, but that’s going to cost the taxpayers a lot of money, given that they’re already going to have access to the carrier. Some feel this is a little bit misguided because the use of government funds provides something people already have.”
Scott Matteson: Do you see any special challenges or roadblocks for the mobile device manufacturers?
Greg Raleigh: “No. Not with the roadmap I’m suggesting, since it’s just a standard LTE chipset, and LTE already has backwards compatibility with 3G. When you make smaller cells you get more spectrum capacity, and there is no need to change the device modems – that’s why it’s so obvious. It takes years and years to convince the industry on the specifics of a new standard and then get the chipsets into phones, and then it’s years and years more before there are enough new phones with the new standard chipsets to make a big difference. The way a cellular network works is that a mobile operator may have, for example, three to five frequencies that they operate on. A radio that uses frequency 1 doesn’t sit next to another radio in a base station that uses frequency 1; it sits next to frequency 2, and that sits next to frequency 3, etc. Eventually the base stations are far enough apart that the radio signals for frequency 1 from the first base station have attenuated, or died down, so that frequency 1 can be reused by another base station. That’s called ‘frequency reuse.’ When you divide cells down, the amount of distance you need between base stations before you can reuse frequency 1 is greatly reduced, and this reduces the area that frequency 1 needs to cover – meaning fewer people share the network bandwidth available on frequency 1. That means you get more spectrum capacity for the network and more speed for the user. In a macrocell deployment, frequency 1 might have covered many tens of square miles before it could be reused. But now if you put a hundred base stations in, and they are located lower than building height with smaller coverage footprints, the signals will attenuate and you can use frequency 1 perhaps anywhere from 20 to 30 times in that same ten-square-mile area. If you reduce the coverage area of the base station by 10X you’ll need 10X more base stations and have 10X more capacity. This is why small cells are inevitable and happening now in the U.S. and Europe.
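The frequency-reuse arithmetic above reduces to a back-of-the-envelope calculation. Here is a minimal sketch (my own illustration with hypothetical numbers, assuming neighboring cells' signals attenuate enough for clean reuse):

```python
def network_capacity(total_area_sq_mi: float,
                     cell_area_sq_mi: float,
                     capacity_per_cell_mbps: float) -> float:
    # Each cell reuses the same frequencies once its neighbors' signals
    # have attenuated, so total capacity scales with the number of cells
    # covering the area.
    n_cells = total_area_sq_mi / cell_area_sq_mi
    return n_cells * capacity_per_cell_mbps

# Macrocell: one cell covering a 10-square-mile area (numbers hypothetical).
macro = network_capacity(10, 10, 300)  # one cell's worth of capacity

# Small cells: shrink each cell's footprint 10x -> 10x the base stations
# -> 10x the capacity over the same area, with no change to the handsets.
small = network_capacity(10, 1, 300)

print(f"macro: {macro} Mbps, small cells: {small} Mbps")
```

The per-cell capacity figure is arbitrary; the point is the ratio, which is exactly the "10X more base stations, 10X more capacity" relationship Raleigh describes.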
“We’re switching gears. I solved radio problems for 20 years, then said ‘now the radio problem is solved, now we need to solve the service problem’ which I think is much more interesting now that we are streaming HD in most cities and even rural areas. It involves improving the core network, not the base stations. Also, improving the IT system in order to make it easy for users to buy services on the fly, so they never have to go to a store or website or make a phone call to their mobile operator. Customers can get certain things for free, they can receive loyalty offers, everything is automatic with popups on the device when they need to buy something. It’s much more cost effective – customers can manage their kids’ accounts, add a device, drop a device, etc. It becomes simple to use wireless and never have to talk to anyone to do it, just like everything else on the mobile Internet. It’s crazy that we buy our socks and washing machines on an app on our cell phone, but we can’t buy our wireless services on our devices. As we make larger and larger capacity available to the world, how we monetize that capacity for the mobile operators is the key. Making it seamless and effortless for users to grab capacity for any device, any family member, or any employee becomes the hot thing. That’s what we’ve been working on at ItsOn for six and a half years.”
Scott Matteson: Anything else you’d like to share with my readers? Any predictions or assessments to part with?
Greg Raleigh: “Within the next three years, plus or minus, capacity in the urban areas is going to be the same for all the operators – and there will be tons of bandwidth for all consumer and business mobile devices. There’s not going to be a difference between one mobile network and another. We’re already reaching parity in the U.S. between four operators, and many parts of Europe have been there for years. The things we just talked about – small cells, fiber backhaul, enhanced types of MIMO that use multiple base stations – are all going to contribute to increasing capacity. Networks won’t be a differentiator. What will differentiate wireless service is how you buy it, how easy and convenient it is, how customizable it is. Services are going to get much more user friendly. There will be new ways to monetize service where people don’t even pay for their service. For instance, sponsors will pay for service: if you want music or video, then service comes with it. If you use an insurance app then you get $5 of free service, for example. There will be new models like that. It’s going to be a really fantastic transition for the user experience. The mobile operators are going to have to focus on the IT stack more than on their radio networks as things go forward. The mobile operator IT stack is moving from tons of multi-vendor silos to a single-vendor cloud connected to very sophisticated device software that engages the user in a real-time contextual experience. That will transform the IT stack into a digital service machine or service factory, where an operator can respond to a new market need by creating, testing and deploying a service in a few hours rather than months or years. That’s really going to be where the game is: digital service factories for mobile operators. All the things we just talked about, such as the base stations, are really important and inevitable.
“That’s why we’re doing what we’re doing at ItsOn – we saw this coming six years ago. We considered doing more things in the radio world, but we concluded that MIMO plus small cells is really going to solve those problems and there’s not much more needed.”