Sensors play an integral role in numerous modern industrial applications, including food processing and
everyday monitoring of activities such as transport, air quality, medical therapeutics, and many more. While sensors
have been with us for more than a century, modern sensors with integrated information and communications
technology (ICT) capabilities—smart sensors—have been around for little more than three decades. Remarkable
progress has been made in computational capabilities, storage, energy management, and a variety of form factors,
connectivity options, and software development environments. These advances have occurred in parallel to a
significant evolution in sensing capabilities. We have witnessed the emergence of biosensors that are now found in a
variety of consumer products, such as tests for pregnancy, cholesterol, allergies, and fertility.
The development and rapid commercialization of low-cost microelectromechanical systems (MEMS) sensors,
such as 3D accelerometers, has led to their integration into a diverse range of devices extending from cars to
smartphones. Affordable semiconductor sensors have catalyzed new areas of ambient sensing platforms, such as
those for home air-quality monitoring. The diverse range of low-cost sensors fostered the emergence of pervasive
sensing. Sensors and sensor networks can now be worn or integrated into our living environment or even into our
clothing with minimal effect on our daily lives. Data from these sensors promises to support new proactive healthcare
paradigms with early detection of potential issues, for example, heart disease risk (elevated cholesterol levels), liver disease (elevated bilirubin levels in urine), anemia (ferritin levels in blood), and so forth. Sensors are increasingly
used to monitor daily activities, such as exercise with instant access to our performance through smartphones.
The relationship between our well-being and our ambient environment is undergoing significant change. Sensor
technologies now empower ordinary citizens with information about air and water quality and other environmental
issues, such as noise pollution. Sharing and socializing this data online supports the evolving concepts of citizen-led
sensing. As people contribute their data online, crowdsourced maps of parameters such as air quality over large
geographical areas can be generated and shared.
Sensors utilize a wide spectrum of transducer and signal transformation approaches with corresponding variations
in technical complexity. These range from relatively simple temperature measurement based on a bimetallic
thermocouple, to the detection of specific bacteria species using sophisticated optical systems. Within the healthcare,
wellness, and environmental domains, there are a variety of sensing approaches, including microelectromechanical
systems (MEMS), optical, mechanical, electrochemical, semiconductor, and biosensing. The the proliferation of sensor-based applications is growing across a range of sensing targets such as air, water, bacteria, movement, and physiology. As with any form of technology, sensors have both strengths and weaknesses. Operational performance may be a function of the transduction method, the deployment environment, or the system components.
Key Sensor Modalities
Each sensor type offers different levels of accuracy, sensitivity, specificity, or ability to operate in different environmental conditions. There are also cost considerations. More expensive sensors typically have more
sophisticated features that generally offer better performance characteristics. Sensors can be used to measure
quantities of interest in three ways:
• Contact: This approach requires physical contact with the quantity of interest. Many classes of targets can be sensed in this way: liquids, gases, objects such as the human body, and more. Deployment of such sensors obviously perturbs the state of the sample or subject to some degree. The type and the extent of this impact is application-specific. Let us look at the example of human body-related applications in more detail.
Comfort and biocompatibility are important considerations for on-body contact sensing. For example, sensors can cause issues such as skin irritation when left in contact for extended periods of time. Fouling of the sensor may also be an issue, and methods to minimize these effects are critical for sensors that have to remain in place for long durations. Contact sensors may have restrictions on size and enclosure design. Contact sensing is commonly used in healthcare- and wellness-oriented applications, particularly where physiological measurements are required, such as in electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG). The response time of
contact sensors is determined by the speed at which the quantity of interest is transported to the measurement site. For example, sensors such as ECG electrodes that measure an electrical signal have a very fast response time. In comparison, the response time of galvanic skin response (GSR) sensing is longer, as it requires the transport of sweat to an electrode, a slower process. Contact surface effects, such as the quality of the electrical contact between an electrode and the subject's skin, also play a role. Poor contact can result in signal noise and the introduction of signal artifacts.
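Noise introduced by poor electrode contact is often reduced in software before the signal is interpreted. The sketch below shows one simple approach, a centred moving-average filter; the sample values are synthetic, not real ECG or GSR data.

```python
# Sketch: smoothing a noisy contact-sensor trace with a moving average.
def moving_average(samples, window=3):
    """Return the centred running mean; windows shrink at the edges."""
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window // 2)
        hi = min(len(samples), i + window // 2 + 1)
        smoothed.append(sum(samples[lo:hi]) / (hi - lo))
    return smoothed

# A contact artifact (the spike of 10) is spread out and attenuated.
trace = moving_average([1, 1, 10, 1, 1], window=3)
```

A wider window suppresses more noise but also blurs fast features, which matters for signals such as ECG where timing is diagnostic.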
On-body contact sensing can be further categorized in terms of the degree of “invasion” or impact. Invasive sensors are those, for example, introduced into human organs through small incisions or into blood vessels, perhaps for in vivo glucose sensing or blood pressure monitoring. Minimally invasive sensing includes patch-type devices on the skin that
monitor interstitial fluids. Non-invasive sensors simply make contact with the body without affecting it, as with pulse oximetry.
• Noncontact: This form of sensing does not require direct contact with the quantity of interest. This approach has the advantage of minimum perturbation of the subject or sample. It is commonly used in ambient sensing applications—applications based on sensors that are ideally hidden from view and, for example, track daily activities and behaviors of individuals in their own homes. Such applications must have minimum impact on the environment or subject of interest in order to preserve state. Sensors that are used in non-contact modes, passive infrared (PIR), for example, generally have fast response times.
• Sample removal: This approach involves an invasive collection of a representative sample by a human or automated sampling system. Sample removal commonly occurs in healthcare and environmental applications, to monitor E. coli in water or glucose levels in blood, for example. Such samples may be analyzed using either sensors or laboratory-based analytical instrumentation.
With sensor-based approaches, small, hand-held, perhaps disposable sensors are commonly used, particularly where rapid measurements are required. The sensor is typically in close proximity to the sample collection site, as is the case with a blood glucose sensor. Such sensors are increasingly being integrated with computing capabilities to provide sophisticated features, such as data processing, presentation, storage, and remote connectivity. Analytical instruments, in contrast, generally have no size limitations and typically contain a variety of sophisticated features, such as autocalibration or inter-sample auto-cleaning and regeneration. Sample preparation is normally required before analysis. Some instruments include sample preparation as an integrated capability. Results for nonbiological samples are generally fast and very accurate. Biological analysis, such as bacteria detection, is usually slower, taking hours or days.
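The data-processing step in such a hand-held sensor typically includes converting a raw transducer reading into a concentration via a calibration curve. The sketch below shows a two-point linear calibration; the raw readings and concentration values are illustrative assumptions, not specifications of any real glucose meter.

```python
def make_calibration(raw_lo, conc_lo, raw_hi, conc_hi):
    """Two-point linear calibration: map a raw sensor reading
    to a concentration by interpolating between two known points."""
    slope = (conc_hi - conc_lo) / (raw_hi - raw_lo)
    def to_concentration(raw):
        return conc_lo + slope * (raw - raw_lo)
    return to_concentration

# Hypothetical calibration: raw 100 <-> 0 mg/dL, raw 900 <-> 400 mg/dL.
glucose = make_calibration(100, 0.0, 900, 400.0)
reading_mg_dl = glucose(500)
```

Real devices use more elaborate (often nonlinear, temperature-compensated) calibrations, but the principle of mapping transducer output to a physical quantity is the same.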
Web Real-Time Communication (WebRTC) is a new standard and industry effort that extends the web browsing model. For the first time, browsers are able to directly exchange real-time media with other browsers in a peer-to-peer fashion.
The standardization goal is to define a WebRTC API that enables a web application running on any device, through secure access to the input peripherals (such as webcams and microphones), to exchange real-time media and data with a remote party in a peer-to-peer fashion.
The classic web architecture semantics are based on a client-server paradigm, where browsers send an HTTP (Hypertext Transfer Protocol) request for content to the web server, which replies with a response containing the information requested.
The resources provided by a server are closely associated with an entity known by a URI (Uniform Resource Identifier) or URL (Uniform Resource Locator).
WebRTC extends the client-server semantics by introducing a peer-to-peer communication paradigm between browsers.
In the WebRTC Trapezoid model, both browsers are running a web application, which is downloaded from a different web server. Signaling messages are used to set up and terminate communications. They are transported by the HTTP or WebSocket protocol via web servers that can modify, translate, or manage them as needed. It is worth noting that the signaling between browser and server is not standardized in WebRTC, as it is considered to be part of the application. As to the data path, a PeerConnection allows media to flow directly between browsers without any intervening servers. The two web servers can communicate using a standard signaling protocol such as SIP or Jingle (XEP-0166). Otherwise, they can use a proprietary signaling protocol.
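Because WebRTC deliberately leaves signaling to the application, the server's role is essentially to relay opaque messages between the two browsers. The toy sketch below models that role with an in-memory mailbox per peer; the class and peer names are illustrative, and a real deployment would carry these messages over HTTP or WebSocket as described above.

```python
import json
from collections import defaultdict, deque

class SignalingRelay:
    """Toy stand-in for the web server's signaling role: it simply
    forwards opaque JSON messages between named peers."""
    def __init__(self):
        self.mailboxes = defaultdict(deque)

    def send(self, to_peer, message):
        # A real server would deliver this over HTTP or WebSocket.
        self.mailboxes[to_peer].append(json.dumps(message))

    def receive(self, peer):
        return json.loads(self.mailboxes[peer].popleft())

relay = SignalingRelay()
relay.send("bob", {"type": "offer", "sdp": "..."})   # Alice's offer
offer = relay.receive("bob")                          # Bob retrieves it
```

The relay never inspects the session description; that opacity is what lets applications choose SIP, Jingle, or a proprietary protocol for the server-to-server leg.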
The most common WebRTC scenario is likely to be the one where both browsers are running the same web application, downloaded from the same web page.
WebRTC in the browser
The web application also interacts with the browser, using both WebRTC and other standardized APIs, both proactively (e.g., to query browser capabilities) and reactively (e.g., to receive browser-generated notifications).
The WebRTC API must therefore provide a wide set of functions, such as connection management (in a peer-to-peer fashion); negotiation, selection, and control of encoding/decoding capabilities; media control; and firewall and NAT traversal.
Let us imagine a real-time audio and video call between two browsers. Communication, in such a scenario, might involve direct media streams between the two browsers, with the media path negotiated and instantiated through a complex sequence of interactions involving the following entities:
Radio frequency identification (RFID) is becoming commonplace in everyday life these days. From tap-and-go payment cards and transit passes to E-ZPass devices used on toll roads to the tags stuck on and sewn into consumer goods to manage inventory and deter theft, most of us encounter RFID tags at least a few times a week and never think about
what can be done with this technology.
In the past few years, a new term has started to bubble up in connection with RFID: near field communication (NFC). Though NFC readers can read from and write to some RFID tags, NFC has more capabilities than RFID, and enables a greater range of uses. You can think of NFC as an extension of RFID, building on a few of the many RFID standards to create a wider data exchange platform.
Imagine you’re sitting on your porch at night. You turn on the porch light, and you can see your neighbor as he passes close to your house because the light reflects off him back to your eyes. That’s passive RFID. The radio signal from a passive RFID reader reaches a tag, the tag absorbs the energy and “reflects” back its identity.
Now imagine you turn on your porch light, and your neighbor in his home sees it and flicks on his porch light so that you can see him waving hello from his porch. That’s active RFID. It can carry a longer range, because the receiver has its own power source, and can therefore generate its own radio signal rather than relying on the energy it absorbs from the sender.
RFID is a lot like those two porches. You and your neighbor know each other’s faces, but you don’t really learn a lot about each other that way. You don’t exchange any meaningful messages. RFID is not a communications technology; rather, it’s designed for identification. RFID tags can hold a small amount of data, and you can read and write
to them from RFID readers, but the amount of data we’re talking about is trivial, a thousand bytes or less.
Now imagine another neighbor passes close, and when you see her, you invite her on to the porch for a chat. She accepts your invitation, and you sit together, exchange pleasantries about your lives, and develop more of a relationship. You talk with each other and you listen to each other for a few minutes. That’s NFC.
NFC is designed to build on RFID by enabling more complex exchanges between participants. You can still read passive RFID tags with an NFC reader, and you can write to their limited amount of memory. NFC also allows you to write data to certain types of RFID tags using a standard format, independent of tag type. You can also communicate with other NFC devices in a two-way, or duplex, exchange. NFC devices can exchange information about each other's capabilities, swap records, and initiate longer term communications through other means. For example, you might tap your NFC-enabled phone to an NFC-enabled stereo so that they can identify each other, learn that they both have WiFi capability, and exchange credentials for communication over WiFi. After that, the phone will start to stream audio over WiFi to the stereo.
Why doesn't the phone stream its audio over the NFC connection? Two reasons: first, the NFC connection is intentionally short range, generally 10cm or less. That allows it to be low-power, and to avoid interference with other radios built into devices using it. Second, it's relatively low-speed compared to WiFi, Bluetooth, and other communications protocols. NFC is not designed to manage extended high-speed communications. It's for short messages, exchanging credentials, and initiating relationships.
Think back to the front porch for a moment. NFC is the exchange you have to open the conversation. If you want to talk at length, you invite your neighbor inside for tea. That's WiFi, Bluetooth, and other extended communications protocols.
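The "standard format, independent of tag type" mentioned above is the NFC Data Exchange Format (NDEF). The sketch below builds a single short-form NDEF Text record by hand, which shows how compact these messages are; it covers only the simplest case (one short record, UTF-8 text) and is not a full NDEF implementation.

```python
def ndef_text_record(text, lang="en"):
    """Build one NDEF 'Text' record in short-record form.
    Header byte 0xD1 = MB | ME | SR flags plus TNF 0x01 (well-known type);
    the payload starts with a status byte holding the language-code length."""
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    return bytes([0xD1, 0x01, len(payload)]) + b"T" + payload

record = ndef_text_record("hi")
```

Even with its framing, a record like this is only a handful of bytes, which is why NFC suits short messages and credential exchange rather than bulk data transfer.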
What’s exciting about NFC is that it allows for some sophisticated introductions and short instructions without the hassle of exchanging passwords, pairing, and all the other more complicated steps that come with those other protocols. That means that when you and your friend want to exchange address information from your phone to his, you can just tap your phones together. When you want to pay with your Google Wallet, you can just tap as you would an RFID-enabled credit card.
When you’re using NFC, your device doesn’t give the other device to which it’s speaking access to its whole memory—it just gives it the basics needed for exchange. You control what it can send and what it can’t, and to whom.
Navigation, in the context of this post, is defined as movement of people from one location to another. Navigation assistance is a means by which people can be provided with information about their navigation needs and preferences.
These two navigation environments, i.e., outdoor navigation and indoor navigation, share common characteristics but also have differences. The advent of computers, among other technologies, paved the way for the development of digital navigation devices and tools. The digital navigational device GPS has had a tremendous impact on the development of modern navigation technology. The impact has been threefold. One is that GPS has allowed ubiquitous, anywhere, anytime positioning, with a level of accuracy and reliability suitable for a wide range of land-based navigation activities. Second is that, as GPS has been used in numerous existing and new applications, people have realized its benefits, especially for land-based navigation.
The evolution of outdoor navigation technology is divided into four generations.
The first generation of navigation technology (~1985 – ~1995) offered basic features and functions; was available for very limited geographic areas (selected cities); offered limited routing options (mainly shortest route); was only available as in-car gadgets (installed as luxury gadgets by automobile manufacturers on selected cars); and provided navigation assistance to the general population.
The second generation of navigation technology (~1995 – ~2000) also offered basic features and functions. However, as advanced techniques were developed and feedback from users was incorporated, the features and functions of the second generation were improved versions of those in the first. The second generation of navigation technology was made available for various geographic areas (many cities); supported limited routing options; was available as portable devices; and provided navigation assistance to the general population.
The third generation of navigation technology (~2000 – ~2005) offered advanced features and functions; provided (optional) wireless connection (primarily to obtain real-time data); was available on mobile devices and personal digital assistants (PDAs); and offered routing options that met the preferences of the general population. Figure 1.5 shows an example of outdoor navigation systems in the third generation.
The fourth generation of navigation technology (~2005 –), which is the current trend, offers navigation services with a variety of new features addressing personalized navigation needs and preferences anywhere, anytime, and for any user. Navigation technology in each of the first three generations can be characterized as generic and system-oriented, assisting with general navigation activities through either in-car navigation systems or portable navigation devices. The fourth generation of navigation technology is characterized as personalized and service-oriented, assisting with navigation activities at the individual level, where services are provided by navigation service providers.
System-oriented navigation assistance and service-oriented navigation assistance can be distinguished by data storage, computation, and communication. Navigation systems are stand-alone devices that can provide navigation assistance without connection to external services, as they contain all the required data and can perform all the required computations. Navigation services are provided through mobile devices (e.g., smartphones), where most of the required data and most of the required computations are provided through remote servers maintained by service providers.
Geo-positioning is at the heart of outdoor navigation systems/services in that most navigation activities depend upon position information provided by geo-positioning sensors, primarily Global Navigation Satellite Systems (GNSS). GIS contributes by providing core static data, including maps, and core navigation functions, such as routing, in navigation systems/services. Wireless communication contributes by providing real-time (dynamic) data, such as traffic, in navigation systems/services for outdoors. As shown in this figure, while wireless communication can be used as a geo-positioning sensor (e.g., WiFi), its geo-positioning role outdoors is not as dominant as that of GPS.
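Once a GNSS sensor reports a latitude/longitude pair, most navigation functions reduce to geometry on those coordinates. A basic building block is the great-circle distance between two positions, sketched below with the haversine formula; the coordinates in the example are arbitrary.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude
    positions, using the haversine formula and a mean Earth radius."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# One degree of longitude along the equator is roughly 111 km.
d = haversine_m(0.0, 0.0, 0.0, 1.0)
```

Routing engines apply this kind of distance repeatedly, both to build edge weights and to snap a GPS fix to the nearest road segment.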
Compared to outdoor navigation systems/services, the evolution of indoor navigation systems/services has a much shorter time span. This is perhaps due to the fact that outdoor navigation is much more complex than indoor navigation. Outdoor navigation imposes certain constraints, such as real-time decision making (especially when driving), requiring solutions to navigation problems in a much larger space (geographic area) and finding optimal routes from a very large solution space (number of options). For example, a trip may require an optimal route among many possible options between a pair of origin and destination locations in a large city, and while en route to the destination a new route may be needed due to changes in weather or traffic or the occurrence of accidents.
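Finding an optimal route from that large solution space is classically done with a shortest-path search over the road network. The sketch below uses Dijkstra's algorithm on a toy graph; re-running it after updating edge costs models the re-routing triggered by traffic or weather changes. The graph and node names are illustrative.

```python
import heapq

def shortest_route(graph, origin, destination):
    """Dijkstra's algorithm over a dict graph: node -> {neighbour: cost}.
    Returns (total_cost, path) or (inf, []) if unreachable."""
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Toy road network: edge weights could be travel times in minutes.
roads = {"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}
best = shortest_route(roads, "A", "C")
```

Production systems use heavily optimized variants (A*, contraction hierarchies) over graphs with millions of edges, but the underlying formulation is the same.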
The evolution of indoor navigation technology can be divided into two generations.
In the first generation of indoor navigation technology, which debuted in the mid-1990s, only a few geo-positioning sensors were available. In general, geo-positioning sensors for indoor navigation were scarce and unaffordable.
The second generation of indoor navigation technology, which debuted around the early 2000s, has enjoyed new geo-positioning sensors and techniques that offer improved accuracy and are widely available and affordable.
It is important to note that, compared to outdoor navigation, where both navigation systems and services can be utilized for navigation assistance, indoor navigation assistance is more meaningful and practical through navigation services on ubiquitous devices such as cell phones (increasingly smartphones).
While it is common and practical for people requiring navigation assistance when driving, biking, or walking outdoors to utilize navigation systems or services provided on mobile devices, it is hard to imagine people walking within a building with specialized mobile devices to find a room. On the other hand, it is conceivable that people would be provided with indoor navigation assistance through navigation services on smartphones, which are becoming commonplace, alleviating the need to carry extra devices for the purpose of navigation.
Like outdoor navigation, geo-positioning is at the heart of indoor navigation technology in that most navigation activities depend upon position information provided by geo-positioning sensors. However, unlike navigation systems/services for outdoors, which are predominantly based on GNSS (e.g., GPS) for geo-positioning, GPS does not play a significant role in navigation systems/services for indoors. Instead, wireless communication (e.g., RFID, WiFi), whose role in outdoor navigation is primarily receiving real-time (dynamic) information, is the predominant geo-positioning sensor indoors. Because indoor navigation is not affected by environmental factors, such as weather conditions, there is little need to receive real-time data by wireless communication. Computer-Aided Design (CAD) and Building Information Models (BIM) contribute by providing core static data, including maps. As shown in this figure, navigation functions, such as routing, could be included as separate modules in navigation systems/services for indoors.
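A common way wireless signals serve as indoor geo-positioning sensors is by converting received signal strength (RSSI) into an approximate distance from a known access point, often via the log-distance path-loss model sketched below. The reference power and path-loss exponent here are illustrative defaults, not calibrated values; real deployments fit them per environment.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate distance in metres from received signal strength using
    the log-distance path-loss model. tx_power_dbm is the RSSI expected
    at 1 m from the transmitter; path_loss_exp ~2 models free space,
    higher values model walls and obstacles."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# A reading 20 dB below the 1 m reference implies roughly 10 m away
# under the free-space assumption.
est_m = rssi_to_distance(-60.0)
```

Distances to three or more access points can then be combined by trilateration to produce a position fix, analogous to how GNSS works outdoors.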
Audio and Video
Over the past decade, video on the web has exploded. As bandwidth has increased and more people have gained access to high-speed internet connections, the likes of YouTube and Vimeo have gripped the imagination of web users. Before HTML5, the most common method for including video on a webpage was to render it using Adobe Flash. YouTube and Vimeo continue to use this approach by default, but both have started migrating to a more accessible and standards-friendly version based on the HTML5 <video> tag. The HTML5 <video> and <audio> tags are fast becoming the method for presenting rich media content in a way that is compatible with all devices, including smartphones.
More recently, many vendors including Apple dropped support for Flash from their mobile devices. The HTML5 specification has long proposed native video and audio in the browser, as part of its aim to reduce the amount of code and work required to deploy common media types to the web. As with other HTML5 enhancements, direct embedding offers numerous accessibility benefits, and search engine indexing improvements over Flash.
The usage is simple: use a <video> tag to embed video and an <audio> tag to embed audio, and nest within the tag links to the different formats in which you have encoded your media. There are two competing video standards, H.264 and WebM, and many more for audio.
In order to use HTML5 to render video, you need to encode your video and audio into multiple formats and link to each format within the <audio> and <video> tags to ensure every HTML5-capable browser will be able to render your media. Alternatively, it is safe to use H.264-encoded video only and provide Flash as a fallback for older browsers that do not support HTML5.
Both new tags allow for fallback content, which makes it a simple process to upgrade your existing Flash embed code to make use of HTML5 without excluding older browsers and with no direct need for browser-sniffing scripts.
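The nesting order matters: browsers try each <source> in turn and fall through to the fallback content only if none is playable. The small helper below assembles that structure as a string, which makes the shape of the markup explicit; the file names and fallback text are placeholders.

```python
def video_markup(sources, fallback_html=""):
    """Assemble an HTML5 <video> element with one <source> per encoding,
    followed by fallback content (e.g., a Flash embed) for old browsers."""
    source_tags = "".join(
        f'<source src="{src}" type="{mime}">' for src, mime in sources
    )
    return f"<video controls>{source_tags}{fallback_html}</video>"

snippet = video_markup(
    [("clip.mp4", "video/mp4"), ("clip.webm", "video/webm")],
    fallback_html="<p>Your browser does not support HTML5 video.</p>",
)
```

In practice you would write this markup directly in the page; the generator form is just a compact way to show that the <source> elements come first and the fallback content last.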
There is a growing market for location-aware applications, where content is specifically oriented toward both the user and their current position. These apps take advantage of a hardware enhancement common to most smartphones running software from Apple, Google, and Microsoft. HTML5 offers us the ability to query the user's location and tailor our web content accordingly.
Translating the user’s location into something meaningful is made easier with the likes of OpenLayers, OpenStreetMap, Bing Maps or Google Maps, and each of these offers an API allowing you to pass in a location expressed in latitude and longitude.