The mobile market has been using the removable SIM card since 1991, when it became mandatory for GSM devices. These are ideal for mobile handsets, which are typically bought through specialist retail outlets that set the handset up on the network chosen by the user. They are not ideal for M2M applications though and, as we head towards the higher volumes forecast for M2M and IoT, that problem only gets worse. So what to do?
Most M2M applications don’t go anywhere near a retail outlet or, if they do, not one in any way set up to deal with SIM cards. Instead, SIM cards are typically delivered in relatively small quantities to the point of use. Once installed, changing them when the network operator contract changes is not so easy. Most times it involves visiting the device on site, where it is often not readily accessible. For those that are accessible, there is a high risk of theft.
All of that adds a lot of cost and is not very scalable. So when the industry talks about moving to huge volumes it is clear that something has to change. The embedded SIM is part of that change. This is installed during manufacture like any other component and the provisioning – to bring the connected device online with the chosen network operator – is carried out remotely over the air. That not only streamlines the manufacturing process, it can also streamline the retail process as well.
It turns out that’s only half the story though. Over the last few years embedded SIM solutions have started to be introduced into the M2M market. That’s great, because it gets the market going and gives it what it needs. The trouble is, these proprietary solutions haven’t been compatible with each other. That does not scale well either, and it reduces choice after the initial purchase – it tends to lock in the installed base of devices. Ultimately, that approach will tend to fragment the market and create new costs – working against scalability in a different way.
Beecham Research has been looking closely at this issue for the GSMA and talking with OEMs in the market – particularly Auto OEMs. Their feedback is that, given one standard solution, they want to move quickly to a Connected Car market. One standard solution will also aid the take-up of connected car services in the second-hand market. In other words, when a new car fitted with such services is resold, there is a better prospect that the subsequent owner will also be able to use those services cost-effectively. That is what the GSMA Embedded SIM Specification aims to deliver, and the fact that this week all of the leading SIM vendors and leading Mobile Network Operators committed to delivering it is good news for the M2M market.
Where cars lead, other products will follow, notably consumer electronics products.
The results of this research and the relevant press releases are available here for free download.
You can also see an infographic on the case for the GSMA Embedded SIM at this link
There is a lot of focus on IoT security right now and many claims of complete end-to-end solutions. In general, though, these solutions are essentially unique to each individual supplier and often built with relatively small numbers of connected devices in mind – perhaps a few tens of thousands or hundreds of thousands at most for a single enterprise user. So how do you scale this level of security to the enormous volumes projected for the Internet of Things? Forecasts talk of multiple billions of connected devices by 2020, with increasing levels of interoperability required between individual solutions on multiple networks and from multiple vendors. This is a hacker’s dream and looks like an accident waiting round the next corner, or the one after that if we’re lucky.
Part of the problem is that there is no non-partisan leader to champion this at present. Standards take way too long, so what to do?
Looking at the problem, what we have is an environment where potential threats are accumulating at a fast pace all around us. It is becoming an inherently “noisy” place for connected devices. Something must be done about these threats, because they could destroy infrastructure and even businesses; they would certainly affect the development of the connected devices market. One can see this as a threat, or as an opportunity. We see it as an opportunity for new added value. In principle, what we have is a required change in the specification for all IoT solutions – in essence, another set of essential technical requirements that must be catered for.
A recent report from Beecham Research talked about the risk of killing the M2M patient with an expensive cure. M2M solutions are typically built to a cost, so implementing high cost security into every solution irrespective of the markets they’re serving is not going to work economically. Instead the report talked about the need for right-sizing security for each M2M solution, but how to do that economically?
We believe the answer to these challenges lies in an approach that builds from the ground up and becomes inherent in the design of future solutions. As such, it must involve the semiconductor level – an essential aspect that has surprisingly been largely missing from IoT security solutions to date.
On September 10, Beecham Research is launching a study that aims to develop an approach towards solving these challenges. An initial report will be published that examines the current security issues that need to be addressed for an effective and developing IoT market. Exclusively, this will also cover Government requirements in Europe and North America for security of what is increasingly being referred to as Critical Infrastructure, of which the Internet of Things is a part.
The study will then include an intensive industry collaboration stage over the next few months – starting in the semiconductor industry and then further up the value chain – followed by publication of a recommended framework and roadmaps for different types of application. More information on this is available from firstname.lastname@example.org.
When considering cellular connectivity, M2M has always been mainly about using low data rates. The aim for M2M solution designers has been to get the required information from a remote device using the least amount of bandwidth feasible for the job. Now that is beginning to change, but not in a uniform way. So why the interest in a high data rate technology like LTE?
In some markets there is now a forced move to 3G and even 4G as 2G networks are switched off to refarm the spectrum. As part of enabling this, higher bandwidths are being made available at low prices and this has raised the prospect in the market of creating richer M2M applications using more, cheap bandwidth rather than continuing the struggle to use the minimum. There are growing instances of considerably more bandwidth now starting to be used by some M2M applications and there is every prospect that this will continue. As part of this, there is increasing interest in using LTE for these.
Most M2M applications do not need that higher bandwidth though. Then there is the idea of the Internet of Things – possibly billions of sensors producing huge amounts of data, but the bandwidth required for each sensor is tiny.
What is often overlooked is that LTE is not just about high data rates, but low ones too. Because it is based on OFDM (Orthogonal Frequency Division Multiplexing) and on IP, the air interface can be split into several narrow band channels of different bandwidths. Release 8 permits channel bandwidths of 1.4, 3, 5, 10, 15 and 20 MHz with no fundamental change to the radio architecture. Allowing bandwidth to be assigned in such a flexible way could make LTE ideal for M2M and IoT applications.
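To make the flexible bandwidth point concrete, each Release 8 channel bandwidth corresponds to a fixed number of 180 kHz resource blocks (12 subcarriers at 15 kHz spacing), which the scheduler can assign per device. A short sketch of that mapping:

```python
# Release 8 LTE channel bandwidths and their resource-block (RB) counts.
# Each RB is 12 subcarriers x 15 kHz = 180 kHz of spectrum.
RESOURCE_BLOCKS = {
    1.4: 6,
    3: 15,
    5: 25,
    10: 50,
    15: 75,
    20: 100,
}

def occupied_bandwidth_mhz(channel_mhz: float) -> float:
    """Spectrum actually occupied by resource blocks (the rest is guard band)."""
    return RESOURCE_BLOCKS[channel_mhz] * 0.18

for bw, rbs in RESOURCE_BLOCKS.items():
    print(f"{bw:>4} MHz channel -> {rbs:3d} RBs, "
          f"{occupied_bandwidth_mhz(bw):.2f} MHz occupied")
```

Since a device can be scheduled as little as one resource block at a time, the same network serves a streaming application and a tiny sensor side by side.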
This is where LTE-M (the M is for M2M) is heading, as outlined at a recent event by Sierra Wireless, who are very much involved in developments in this area, initially through their acquisition of part of Sagemcom. See here for the original EU-funded project covering this. This is increasingly being referred to as Cellular for IoT and – intriguingly – may cover local mesh networking as well as traditional cellular switching, all aimed at being low cost, low power and low data rate. There are still various options being discussed, but the goal is very low cost modules (under $5), very long battery life (more than 10 years) and low data rates . . . over existing cellular infrastructure. All of this can be two-way communication, unlike some other narrow band options. Too good to be true? We will have to see, but 2016 should see this start to come to market.
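As a rough illustration of how a 10-year battery life could be plausible, a back-of-envelope calculation shows how far duty cycling stretches a battery. All of the current-draw and usage figures below are assumptions chosen for the sketch, not measured values for any particular LTE-M module:

```python
# Back-of-envelope battery-life estimate for a heavily duty-cycled
# cellular device. Figures are illustrative assumptions only.

def battery_life_years(capacity_mah: float,
                       sleep_ma: float,
                       active_ma: float,
                       active_seconds_per_day: float) -> float:
    """Average the sleep and active current draw over a day,
    then divide battery capacity by that average."""
    seconds_per_day = 86_400
    sleep_seconds = seconds_per_day - active_seconds_per_day
    avg_ma = (sleep_ma * sleep_seconds
              + active_ma * active_seconds_per_day) / seconds_per_day
    hours = capacity_mah / avg_ma
    return hours / (24 * 365)

# Assumed: a 2000 mAh cell, 5 uA sleep current, 100 mA while
# transmitting, and one 10-second report per day.
print(f"{battery_life_years(2000, 0.005, 100, 10):.1f} years")
```

The point of the arithmetic is that battery life is dominated by the sleep current, which is why very low power operation matters more than raw data rate for this class of device.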
There is a tendency to think of Wearable Technology as directed entirely at the consumer market. This is not the case though. Increasingly, Wearable Technology is being used as part of business processes as well. Why? Because it can enable big improvements to knowledge transfer, increase productivity, add new security and a host of other benefits in the work environment. It can be fun too.
Last year, Beecham Research published a new Wearable Technology Application Chart showing 7 key sectors and 28 different application groups where there were current products on the market. Today, we have published our new report on this market which includes a revised chart that now shows 8 sectors and 34 different application groups. Follow this link for a free download of the new chart. The new report is also available at this link.
The new sector is Business Operations and we have now included it as a separate sector partly because of its growth potential and also partly because it can now represent an important stage in the evolution of new product ideas. Take Google Glass for example – Google’s mobile head-up display. Condemned as a fashion faux pas in the high volume consumer market for its geeky appearance, that same device is now being used for customer service applications where it allows staff to offer a more personalised experience. Function matters more than style in business. Virgin Atlantic is one company trialling Glass for this.
Other examples include Motorola’s Laser Ring Scanner, a bar code reader designed to be worn on the index finger, and Walt Disney World’s use of MagicBands throughout its resorts and theme parks providing a room key, park entry and money access.
In the work environment, fashion styling is not such a must-have as it is in the consumer space. The emphasis is on improving business processes. If it looks great that’s a plus, but it’s not such a barrier if it looks a bit clunky. That means early designs that may not be ultra sleek have an opportunity to be tried out and perfected in the business environment first, before being subjected to the style pressures of the consumer market – where, if anything, style matters more than function.
The business market for wearables is already large. As detailed in our latest report, we calculate it is about a third the size of the consumer market and growing strongly.
It seems to have become a received wisdom that, for M2M and IoT applications in the future, all the processing of remote machine data will take place in the cloud. But is that really the case? As far as we can see, intelligent devices at the edge are getting more intelligent – not less. So what’s really going on?
This issue was explored in detail in a recent white paper by Beecham Research for Oracle. Follow this link to download.
Firstly, this is not just an esoteric argument. It does matter where the intelligence for applications actually is. It matters for security, for power requirements, for data flows, for speed of response and for robustness of the solution. It has a big impact on the architecture of the solution and therefore for the support of it.
The thinking goes that M2M – and in particular IoT – is all about lots of dumb sensors out there with their data being sent to the cloud for processing. So where are all these dumb sensors? Most of them are currently attached to machines. In the future, even more will be attached to those machines, but in addition they will be in the environment immediately surrounding those machines as well. Why? At the moment, to monitor what is happening, but in the future we will need more sensor points in order to optimise performance. Therein lies the true opportunity – moving from remote monitoring to performance optimisation. In other words, automated control.
Sensors will in future be everywhere. No doubt about that. Good times for sensor manufacturers. So where will the data from these be processed for performance optimisation, or for traffic control or a myriad other real time applications? At the centre? What if the network goes down – does everything stop? Does it really make sense to send all data to the centre for processing, then send it back to remotely control a machine or device? How fast is the response time for that sort of solution?
Clearly a more robust solution is to have processing at the edge and at the centre, in a hierarchy. At the edge for speed of response and robustness, while at the centre perhaps for support, maintenance and Big Data analysis. One of the key new opportunities that IoT is aiming to address is sharing data between applications to create cross sector service opportunities. Depending on the application and the requirements, that might mean intelligent devices at the edges of networks connected not just to the Internet but to other devices as well. Business value will come from using the large amounts of resulting data and acting on it quickly, as closely and as automatically as possible, to create new services.
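The edge/centre hierarchy can be sketched in a few lines. This is an illustrative toy rather than a reference design, and the class and field names are invented: the edge node makes the control decision locally and forwards only periodic aggregates to the centre.

```python
# Toy edge node: real-time control happens locally; only summaries
# travel to the centre for support and Big Data analysis.

class EdgeNode:
    def __init__(self, threshold: float, batch_size: int):
        self.threshold = threshold
        self.batch_size = batch_size
        self.buffer: list = []
        self.uplink: list = []   # stands in for a network send to the centre

    def on_reading(self, value: float) -> str:
        # Decision made at the edge - no round trip to the centre,
        # so the control loop keeps working even if the network is down.
        action = "throttle" if value > self.threshold else "ok"
        self.buffer.append(value)
        if len(self.buffer) >= self.batch_size:
            # Only an aggregate goes upstream, not every raw reading.
            self.uplink.append({
                "count": len(self.buffer),
                "mean": sum(self.buffer) / len(self.buffer),
                "max": max(self.buffer),
            })
            self.buffer.clear()
        return action

node = EdgeNode(threshold=80.0, batch_size=4)
for reading in [72.1, 85.3, 79.9, 90.2]:
    node.on_reading(reading)
print(node.uplink)  # one aggregate record instead of four raw readings
```

The design choice is the one argued above: response time and robustness come from the local decision, while the centre still sees enough data for analysis.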
In other words, the future for efficient and effective cloud processing for IoT is to have huge amounts of processing at the edge as well. That’s where it gets a bit more complex . . .
Announced today (see this link for more details) – Telit Wireless Solutions acquires ILS Technology, a leading provider of a ready-to-use, off-the-shelf, cloud platform to connect enterprise IT systems to m2m-connected devices and machines for business-critical use.
This is a deal designed to bolster Telit’s previous acquisitions of GlobalConect – which formed the basis of module management services in m2mAir – and CrossBridge Solutions, which provided connectivity solutions for the North America market. Both of these acquisitions covered network layer services associated with connectivity management, so ILS Technology is a complementary addition covering application layer services.
On the face of it, this is both a bold and logical move by Telit to expand its services offering. It represents a critical move up the value chain from supplying modules, remote module management and connectivity to providing application support as well – in other words, supporting the management of the M2M data flowing through that connectivity. It also provides a common thread linking CrossBridge Solutions’ connectivity for the North America market and Telefonica/Jasper Wireless managed connectivity elsewhere – all provided as elements of Telit’s m2mAir.
ILS Technology, previously part of Park Ohio, has been mainly operating in the industrial applications environment to date using primarily fixed line connectivity (but some cellular as well) so some work will be required to move this more into the cellular environment. We await further developments on this with interest.
There is a lot of discussion going on right now about when the next smart watch will appear. Samsung is expected to launch their Galaxy Gear smart watch at the IFA show in Berlin next week. The apparent aim of doing this is to trounce Apple, who may or may not launch their rumoured iWatch at their own event on September 10 in San Francisco.
In view of this and some pretty optimistic forecasts that have been thrown together recently, it is worth exploring the question – who really wants a smart watch?
Over the last few years, it has become increasingly apparent that wrist watch use is declining – especially in the under-30 age group – with fewer people wearing them. If you use a watch just for telling the time, your mobile phone is often just as handy, and if you take your phone everywhere, why bother to wear a watch?
Females are much less likely to wear a watch than males. When they do, it is often more of a fashion accessory and the form factor is often extremely important – it must generally be small and stylish, or look like a piece of jewellery. At the same time, ownership of smartphones among females is shooting up compared with that of males – to the point where in some countries like the UK there is now a higher percentage of female ownership of smartphones than male ownership.
Even among males, watches appear to be worn less these days than was the case even a few years ago. For those that do, style is also increasingly important. While some want a big and chunky watch, others want something more refined and perhaps less feature rich. In other words, people want to choose what they wear and their choices are very different.
Against this background, the smart watch is aiming to disrupt the watch market by adding a lot of new features and apps. Will this be sufficient to change what people want to wear on their wrists though? How different will they look? Will everyone suddenly want to wear basically the same thing or will they want to choose between many different style formats? If these products are physically quite large, possibly with wide straps, how appealing will they be to the large majority of the target buyers?
Beecham Research’s new report on Wearable Technology (Wearable Technology: Towards Function with Style – follow this link for more details) explores these issues using a new methodology of fashion profiles – 35 of them. In this market, it is not just a question of early technology adopters and late adopters. It is all about aspiration and style, who wants to wear what and why. Smart watches are aiming to introduce many new features and applications under the headings of communication, fitness, personal security, retail, wellness and others. Important though these undoubtedly are, unless these new products address the aspirational and style needs of their target users they are unlikely to actually be worn. For the Wearable Technology market to really take off, it is not in fact the technology that will make the sale. It is how it looks and the different options available to change that appearance that will really count.
We recently conducted a survey with Oracle of the M2M/IoT market to see what people are expecting regarding application intelligence for connected devices over the next few years. Does it need to increase? If so, is this at the network center or at the edge? What are the implications of this? We presented the results at a webinar on June 27 – available here if you missed it and want to catch up on it.
Not too surprisingly, everyone in the survey reckoned that application intelligence for connected devices needs to increase in the next 3 years. Perhaps more surprisingly – in view of the increasing interest in cloud-based services – the vast majority of these also thought that greater intelligence will be required at the network edge. In other words, forget the idea that edge devices will stay the same or get dumber with all the intelligence for applications migrating to the center. It’s not going to happen like that. Why’s that then?
Firstly, the survey found that a full 81% of those expecting more application intelligence also expect the need for real time decision making at the edge to increase. Why? M2M has always been about real time data, but it has also been about sending that data from a remote device or sensor to a data server, typically at the center, for processing and subsequent distribution of information. So why the greater need for real time decision making at the edge? In our view this reflects a move towards optimization of operations rather than just monitoring them. This not only needs more data, it needs it more quickly. That’s of course very consistent with the ideas behind the Internet of Things.
In line with this, respondents also expected a greater need for local data storage at the edge. In addition to all of this though is the greater need for solution security right from the edge all the way through to the center. That also requires greater intelligence at the edge.
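The local storage point can be illustrated with a simple store-and-forward buffer; the names here are invented for the sketch. Readings persist at the edge during an outage and drain to the center when the uplink returns, so nothing is lost:

```python
# Illustrative store-and-forward buffer for an edge device.
from collections import deque

class StoreAndForward:
    def __init__(self):
        self.queue = deque()

    def record(self, sample):
        self.queue.append(sample)        # local storage at the edge

    def drain(self, link_up: bool, send) -> int:
        """Forward queued samples while the link is up; return number sent."""
        sent = 0
        while link_up and self.queue:
            send(self.queue.popleft())   # oldest first, so order is preserved
            sent += 1
        return sent

buf = StoreAndForward()
delivered = []
for s in ("t=1 42C", "t=2 43C"):
    buf.record(s)
buf.drain(link_up=False, send=delivered.append)   # outage: nothing is lost
buf.record("t=3 41C")
buf.drain(link_up=True, send=delivered.append)
print(delivered)  # all three samples arrive at the center, in order
```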
It’s all happening at the edge then. Is that the whole story? Catch the next webinar in the series in October for more on this.
The IMC (International M2M Council) was finally launched at the CTIA show this last week, after much discussion and many meetings in Europe and North America over the last 18 months. We see this as a significant and very welcome new trade organization – one that is dedicated to giving an international voice to the M2M/IoT community and does not view M2M through the narrow perspective of a single technology, product category or vertical industry. The key aim will be to bring together vendors who collectively form the M2M Solution Provider community with Adopters of M2M solutions into a single membership organization. We believe it will be highly complementary to other regionally-focused M2M organizations, notably the increasingly successful M2M Alliance in Germany. Founding members of the IMC are Deutsche Telekom, Digi, Kore Telematics, Oracle, Orbcomm and Telit, with more to follow soon we believe. See the website for more details – www.im2mc.org
Elsewhere, a trend we have mentioned before was very much in evidence at the show this year: the shift in focus from connectivity to extracting value from connected device data. While device connectivity will always be important – crucial even – it is becoming more accepted in the market as a given and something to build on. For the last four years, Beecham Research has drawn a distinction between what we refer to as the network layer (managed connectivity) and the application layer (managed data) within the overall scope of M2M Service Enablement Services (SES) – M2M platforms. We also predicted long ago the necessary shift in focus that is now evident: towards the application layer that has more intrinsic value than the network layer. Expect lots of activity – acquisitions and new development announcements – in the M2M market centred around the SES application layer during the next 6-12 months! Such activity will lead the market to a whole new ballgame – of which more later . . .
The most interesting and practical new service we saw at the show was Telenor Connexion’s (www.telenorconnexion.com) Split Billing Suite, initially aimed at the Connected Car market but with lots of potential elsewhere as well – for example in the Smart Home environment. The principle behind this is the need to find a way of sharing a single network connection between many different and diverse services, each often being provided by completely different service providers. Think vehicle diagnostics versus in-car entertainment. How do you get two such diverse services to share a single connection out of the car, so you don’t end up with dedicated connections for each service that may well price both of them out of the market? Who pays for that single connection? How does each independent service provider get access to that connection? Telenor Connexion’s answer is to offer an intermediate service that offers deep packet inspection so that service packet streams can be separated out in real time and directed to each service provider independently. We think this is an elegant approach to a problem that the M2M market has been wrestling with for some time.
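A highly simplified sketch of the idea behind splitting one connection’s traffic between providers (the real service uses deep packet inspection; the hostnames, services and mapping here are invented purely for illustration):

```python
# Toy split-billing classifier: attribute each packet's bytes to the
# service provider responsible for that traffic. All names are invented.

# Map traffic destinations to the service that pays for them.
SERVICE_MAP = {
    "diagnostics.example-oem.com": "vehicle_diagnostics",
    "stream.example-media.com": "in_car_entertainment",
}

def split_bill(packets):
    """packets: iterable of (destination_host, size_bytes).
    Returns total bytes attributed to each service."""
    totals = {service: 0 for service in SERVICE_MAP.values()}
    totals["unclassified"] = 0
    for host, size in packets:
        service = SERVICE_MAP.get(host, "unclassified")
        totals[service] += size
    return totals

traffic = [
    ("diagnostics.example-oem.com", 512),
    ("stream.example-media.com", 150_000),
    ("stream.example-media.com", 150_000),
]
print(split_bill(traffic))
```

In practice the separation happens in real time on the packet stream itself rather than on a host lookup, but the accounting principle – one connection, many independently billed services – is the same.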