Location Based Social Networks

Location-based Social Networks (LBSNs) can be considered a special category of Online Social Network (OSN). An LBSN has the same properties as an OSN, but treats location as the core object of its structure.

Recently, advances in broadband wireless networks and location-sensing technologies have led to the emergence of smart mobile phones, tablets and other devices that allow ubiquitous access to the Web. In this new era, users can reach location-based services from anywhere via mobile devices. Moreover, by using LBSNs, users can share location-related information with each other and leverage collective social knowledge.

LBSNs allow users to see where their friends are, to search location-tagged content within their social graph, and to meet others nearby. They form a new social structure made up of individuals connected by the interdependency derived from their locations in the physical world, as well as by their location-tagged media content, such as photos, videos, and text. LBSNs are a type of social networking in which geographic services and capabilities, such as geo-coding and geo-tagging, enable additional social dynamics. An LBSN presents three layers: the user layer, the location layer, and the content layer. Information from each layer can be exploited independently to drive recommendations. For instance, we can compute the geographical distance (e.g. Euclidean distance) between each pair of places in the location layer, calculate the similarity among users based on the social network in the user layer, and, in the content layer, compute the similarity among information objects (e.g. videos, tags) based on their metadata. Notice also the ternary relation among the entities (user, location, content), which cuts across all three layers.
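
As a rough illustration of how each layer can be mined independently, the following sketch (with invented data and helper names) computes the Euclidean distance between two places in the location layer and the cosine similarity between two users' check-in vectors in the user layer; treating latitude/longitude as planar coordinates is a simplification for illustration only.

```python
import math

def euclidean_distance(p1, p2):
    """Straight-line distance between two places given as (x, y) coordinates."""
    return math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)

def cosine_similarity(u, v):
    """Similarity between two users represented as check-in count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical example data: two places and two users' check-in vectors
place_a, place_b = (40.71, -74.00), (40.76, -73.98)   # (lat, lon) treated as planar
user_1, user_2 = [3, 0, 5, 1], [2, 1, 4, 0]           # check-ins at four places

print(euclidean_distance(place_a, place_b))
print(cosine_similarity(user_1, user_2))
```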

Acquiring this abundant contextual information, LBSNs can improve the quality of services in three areas: (1) generic (non-personalized) recommendations of social events, locations, activities and friends; (2) personalized recommendations of social events, locations, activities and friends; and (3) user and group mobility behavior modeling and community discovery.

Generic Recommendations

Generic recommendations compute the same recommendation list (locations, activities, events, etc.) for all users, regardless of the personalized preferences of each individual user. The simplest recommender systems are those based on counting frequencies of occurrences or co-occurrences of some given dimension. For example, a simple recommender system could just count the number of check-ins per place, rank the places, and recommend those with the largest number of check-ins.

A location recommender, for any user who travels in a specific city (e.g. New York), can first count each location's check-in frequency. Then, it can recommend the top-n locations by sorting them in decreasing order of their scores and selecting the n most popular. Note that an interesting location can be a cultural place, such as the Acropolis of Athens (i.e. a popular tourist destination), or a commonly frequented public area, such as a shopping street or restaurant. As far as activity recommendations are concerned, an activity recommender can provide a user with the most popular activities that may take place at a given location, e.g. dining or shopping. A target user can tell the system the activity she wants to do and the place she is at (e.g. coffee in New York). The system then returns a map of coffee places near the user's location (e.g. EuroPan Cafe at location A) that have been visited many times (e.g. 17 times) by 16 people. All of the aforementioned recommendations can guide a user in an unfamiliar place.
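
A minimal sketch of such a popularity-based recommender, assuming check-ins are available as a simple list of (user, place) pairs (the data and names are illustrative):

```python
from collections import Counter

def top_n_places(checkins, n=5):
    """Rank places by raw check-in frequency and return the n most popular."""
    counts = Counter(place for _user, place in checkins)
    return [place for place, _count in counts.most_common(n)]

# Illustrative check-in log: (user, place) pairs
checkins = [
    ("alice", "Acropolis"), ("bob", "Acropolis"), ("carol", "Acropolis"),
    ("alice", "EuroPan Cafe"), ("bob", "Plaka"), ("carol", "EuroPan Cafe"),
]
print(top_n_places(checkins, n=2))   # every user receives the same list
```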

Personalized Recommendations

Personalized recommender systems rely on users' past check-in history. They correlate a user with other users that have similar preferences and suggest new locations, activities and events. In particular, a personalized recommender exploits the times that someone has visited a location, and her explicit ratings or comments on that location, to predict her interest in unvisited places. Three approaches have emerged in the context of recommender systems: collaborative filtering (CF), content-based filtering (CB) and hybrid methods. In the following, we briefly discuss the special characteristics of each approach in the LBSN field.

CF methods recommend to the target user those locations, activities and events in a city that have been rated highly by other users with similar preferences and tastes. In most CF approaches, only the locations and the users' ratings are accessible, and no additional information about locations or users is provided. User-based CF employs user similarities to form a neighborhood of nearest users. It is effective in terms of recommendation accuracy, but it cannot scale up easily because computing the user similarity matrix is expensive. In contrast, location-based CF employs location similarities to form the neighborhood, which reduces the scalability problem. In either case, a pitfall of both user-based and location-based CF is the cold-start problem: new locations have received only a few ratings, so they cannot be recommended, and new users have performed only a few visits, so similar users can hardly be found for them.
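
The sketch below illustrates the user-based variant under simplifying assumptions: users are rows of a check-in count matrix, similarity is cosine, and a candidate location's score is the similarity-weighted sum of the neighbors' check-ins. All names and data are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two check-in count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, matrix, n=2):
    """Score locations the target has not visited by neighbors' weighted check-ins."""
    scores = {}
    for user, visits in matrix.items():
        if user == target:
            continue
        sim = cosine(matrix[target], visits)
        for loc, count in enumerate(visits):
            if matrix[target][loc] == 0 and count > 0:
                scores[loc] = scores.get(loc, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Rows: check-in counts at locations 0..3 (illustrative)
matrix = {"alice": [3, 0, 0, 2], "bob": [2, 4, 0, 1], "carol": [0, 5, 1, 0]}
print(recommend("alice", matrix))   # unvisited locations ranked for alice
```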

CB methods assume that each user operates independently. As a result, they exploit only information derived from location features. For example, a restaurant may have features such as cuisine and cost. If a user has set her preferred cuisine to Chinese in her profile, then Chinese restaurants will be presented to her. In particular, CB exploits a set of attributes that describe a location and recommends other locations similar to those that already exist in the user's profile. This way, the cold-start problems faced by CF methods for new locations and new users are alleviated. The limitation of these systems, however, is that other people's preferences are not considered, and the resulting location and activity recommendations lack diversity.
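
A content-based matcher can be sketched in a few lines: each location is described by a small set of attributes, the user profile lists preferred attribute values, and locations are ranked by how many preferences they satisfy. The attributes and data here are invented for illustration.

```python
def cb_rank(profile, locations):
    """Rank locations by the number of profile preferences their attributes satisfy."""
    def match(attrs):
        return sum(1 for key, wanted in profile.items() if attrs.get(key) == wanted)
    return sorted(locations, key=lambda name: match(locations[name]), reverse=True)

profile = {"cuisine": "Chinese", "cost": "low"}                  # user preferences
locations = {
    "Golden Dragon": {"cuisine": "Chinese", "cost": "low"},
    "Trattoria":     {"cuisine": "Italian", "cost": "medium"},
    "Jade Palace":   {"cuisine": "Chinese", "cost": "high"},
}
print(cb_rank(profile, locations))   # Chinese restaurants rank first
```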

Combining social with geographical data is becoming a way of handling the shortcomings that arise when only one type of data is taken into consideration. For example, the social graph (i.e. trust/friend connections) does not capture location behavior, whereas collaborative filtering maintains a user profile based mainly on rating data. The idea of a hybrid approach is that by using both kinds of data (i.e. social and rating data) each can compensate for the other's shortcomings and make the recommendations more accurate. A hybrid system combines geographical data with social data, using GPS location data, user ratings and user activities to propose location and activity recommendations to interested users, along with appropriate explanations.
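
One simple way to realize such a hybrid, shown below as a hedged sketch, is to blend a collaborative-filtering score with a social score derived from the friend graph and a geographical term that rewards nearby places. The weights and helper scores are assumptions for illustration, not a prescribed method.

```python
def hybrid_score(cf_score, social_score, distance_km, w_cf=0.5, w_social=0.3, w_geo=0.2):
    """Blend CF, social, and geographical evidence into a single score in [0, 1]."""
    geo_score = 1.0 / (1.0 + distance_km)   # nearer places score higher
    return w_cf * cf_score + w_social * social_score + w_geo * geo_score

# Illustrative: a place rated well by similar users, liked by two friends, 3 km away
print(hybrid_score(cf_score=0.8, social_score=0.6, distance_km=3.0))
```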

Integrating Sensors in Social Networks

A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users.

Social networks have become extremely popular in recent years, thanks to numerous online social networks such as Facebook, LinkedIn and MySpace. In addition, many chat applications can also be modeled as social networks. Social networks provide a rich and flexible platform for performing the mining process with different kinds of data such as text, images, audio and video. Therefore, a tremendous amount of research has been performed in recent years on mining such data in the context of social networks. In particular, it has been observed that combining linkage structure with different kinds of data can be a very powerful tool for mining purposes. Recent work has shown how the text in social networks can be combined with the linkage structure in order to build more effective classification models. Other recent work uses the linkage structure in image data in order to perform more effective mining and search in information networks. It is therefore natural to explore whether sensor data processing can be tightly integrated with social network construction and analysis. Most of the aforementioned data types on a social network are static and change slowly over time. Sensors, on the other hand, collect vast amounts of data which need to be stored and processed in real time. There are a couple of important drivers for integrating sensor and social networks:

-One driver for integrating sensors and social networks is to allow the actors in the social network both to publish their own data and to subscribe to each other's data, either directly or indirectly after discovery of useful information from such data. The idea is that such collaborative sharing on a social network can increase the real-time awareness of different users about each other, and provide unprecedented information and understanding about the global behavior of the different actors in the social network. This is the vision of integrating sensor processing with the real world.

-A second driver for integrating sensors and social networks is to better understand or measure the aggregate behavior of self-selected communities or the external environment in which these communities function. Examples include understanding traffic conditions in a city, understanding environmental pollution levels, or measuring obesity trends. Sensors in the possession of large numbers of individuals make it possible to exploit the crowd for massively distributed data collection and processing. Recent literature reports several efforts that recruit individuals for data collection and processing, such as collecting vehicular GPS trajectories as a way of developing street maps, collectively locating items of interest using cell-phone reports (for example, mapping speed traps with the Trapster application), using massive human input to translate documents, and developing protein-folding games that use competition among players to implement the equivalent of global optimization algorithms.

The above trends are enabled by the emergence of large-scale data collection opportunities, brought about by the proliferation of sensing devices in everyday use such as cell phones, pedometers, smart energy meters, fuel consumption sensors (standardized in modern vehicles), and GPS navigators. The proliferation of many sensors in the possession of the common individual creates an unprecedented potential for building services that leverage massive amounts of data collected from willing participants, or that involve such participants as elements of distributed computing applications. Social networks, in a sensor-rich world, have become inherently multi-modal data sources, because of the richness of the data collection process in the context of the network structure. In recent years, sensor data collection techniques and services have been integrated into many kinds of social networks. These services have caused a computational paradigm shift, known as crowd-sourcing, referring to the involvement of the general population in data collection and processing. Crowd-sourcing, arguably pioneered by programs such as SETI, has become remarkably successful recently due to increased networking, mobile connectivity and geo-tagging. Some examples of the integration of social and sensor networks are as follows:

-The Google Latitude application collects the mobile position data of users and shares it among different users. The sharing of such data among users can lead to significant events of interest. For example, proximity alerts may be triggered when two linked users are within geographical proximity of one another (a minimal proximity check of this kind is sketched after these examples). This may itself trigger changes in user-behavior patterns, and therefore in the underlying sensor values. This is generally true of many such applications: the data from one sensor can influence the data from other sensors.

-The City Sense application collects sensor data from fixed sensors, GPS-enabled cell phones and cabs in order to determine where people are, and then delivers this information to clients who subscribe to it, including clients with mobile devices. This kind of social networking application provides a "sense" of where the people in the city are, and can be used to plan activities effectively. A similar project developed at MIT, referred to as WikiCity, uses mobile data collected from cell phones to determine spatial trends in a city and which streets are the most socially active.

-This general approach of collecting individual location data from mobile phones can also be used to support business decisions. For example, the MacroSense project analyzes customers' location behavior in order to determine which individuals behave in a way similar to a given target. The application can perform real-time recommendation, personalization and discovery from real-time location data.
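
As referenced above, a proximity alert like the one in the Google Latitude example can be reduced to a distance test between two linked users' latest positions. The sketch below uses the haversine formula; the coordinates and the 1 km threshold are illustrative assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def proximity_alert(user_a, user_b, threshold_km=1.0):
    """Return True if two linked users' positions are within the threshold distance."""
    return haversine_km(*user_a, *user_b) <= threshold_km

# Two hypothetical users a few blocks apart in Manhattan
print(proximity_alert((40.7580, -73.9855), (40.7527, -73.9772)))
```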

Automotive Tracking Application: A number of real-time automotive tracking applications determine the important points of congestion in a city by pooling GPS data from the vehicles in the city. This can be used by other drivers to avoid points of congestion. In many applications, such objects may have implicit links among them; for example, in a military application, the different vehicles may have links depending upon their unit membership or other related data. Another related application is the sharing of bike routes by different users. Finding bike routes is naturally a trial-and-error process in terms of identifying paths which are safe and enjoyable. Biketastic is one such design: it uses GPS-based sensing on a mobile phone to create a platform which enables rich sharing of biker experiences with one another. The microphone and the accelerometer embedded in the phone are sampled to infer route noise level and roughness, while speed is inferred directly from the phone's GPS. The platform combines this rich sensor data with mapping and visualization in order to provide an intuitive and visual interface for sharing information about bike routes.
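
Under the assumed simplification that route roughness can be approximated by the variability of vertical acceleration, and noise level by the mean microphone amplitude, a per-ride feature extraction step might look like the following sketch (sample values are invented):

```python
import statistics

def route_features(accel_z, mic_amplitude, gps_speeds_mps):
    """Summarize one ride: roughness, noise level, and average speed (illustrative)."""
    return {
        "roughness": statistics.pstdev(accel_z),      # bumpier roads vary more
        "noise_level": statistics.mean(mic_amplitude),
        "avg_speed_kmh": statistics.mean(gps_speeds_mps) * 3.6,
    }

# Hypothetical samples from one ride
print(route_features(
    accel_z=[9.7, 9.9, 10.4, 9.2, 10.8],
    mic_amplitude=[55.0, 61.0, 58.0],
    gps_speeds_mps=[4.2, 5.0, 4.6],
))
```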

Animal Tracking: In its most general interpretation, an actor in a social network need not necessarily be a person; it can be any living entity, such as an animal. Recently, animal tracking data has been collected with the use of radio-frequency identifiers. A number of social links may exist between different animals, such as group membership or family membership. It is extremely useful to utilize the sensor information to predict linkage information, and vice versa. A recent project called MoveBank has made tremendous advances in collecting such data sets. A similar approach may be used for commercial product-tracking applications, though social networking applications are generally relevant to living entities, most typically people.

Mashups for the Web of Things

The Web of Things (WoT), together with mashup-like applications, is gaining popularity as the Internet develops towards a network of interconnected objects, ranging from cars and transportation cargos to electrical appliances. Here I will provide a brief architectural overview of technologies which can be used in Web of Things mashups, with emphasis on artificial intelligence technologies such as conceptualization and stream processing, and a look at data sources and existing Web of Things mashups.

Introduction:

The Web of Things is an emerging concept which extends already existing concepts such as the Sensor Web, where all sensor data and metadata would be published and available to anyone. The things themselves are everyday objects (e.g. a coffee mug, chair, truck, or robotic arm) containing a small computing and communicating device. This device is most often a sensor node; however, it can also be an active or passive RFID tag, in which case computing is done at the server. The things currently form isolated networks, controlled by different entities, and most often the data remain closed and are rarely used to their full potential. Connecting (or federating) these islands of things using web standards is referred to as the Web of Things (WoT).

The mashups for the Web of Things, also referred to as physical mashups, use raw or processed data coming from things, as well as already existing web data and services to build new applications. The development of such technology is expected to have a high impact on humanity, among others on efficiently servicing increasingly urbanized cities with food, transport, electricity and water in an environmentally sustainable way.

One way of looking at the Web of Things is to see things as organs which detect stimuli. These stimuli are then sent via wireless or wired technology, typically over an IP/HTTP network, to processing and storage engines. These engines crunch the received information and generate knowledge. Sometimes they can also trigger an action, such as sending a tweet. This is somewhat similar to how we humans function: we have five senses perceived by corresponding organs, the stimuli are sent to the brain via the nerves, and the brain processes them. The result is most often knowledge, and sometimes actions are triggered: the brain transmits commands via the nerves to the muscles, which then contract and cause movement of hands and legs, talking, and so on. One distinction is that while in humans the sensors and processors are spatially close to each other (e.g. nose and brain, or ears and brain), in the case of the WoT we may be looking at a globally distributed system.

Architectural considerations:

In the technological pipeline for the WoT, the raw data and metadata coming from the network of things can be annotated and enriched (we refer to this as conceptualization), stored using approaches specific to streaming, and processed using techniques such as stream mining, event detection and anomaly detection. WoT mashups can take and use the data at any of these stages.

Network of Things

The things are objects that can be digitally identified by some code such as an Electronic Product Code (EPC), Radio Frequency IDentification (RFID), Near Field Communication (NFC), or Internet Protocol (IP) v4 or v6. Using these digital identities, things can then be observed, for example by tracking them in production plants and warehouses, by observing their usage patterns, or by observing their context. Here we focus on things that feature sensors and an embedded device, mostly because the mashup we develop addresses environmental intelligence based on sensor data streams.

The embedded device typically contains four modules: the central processing unit and memory, the communication module, the sensor/actuator and the power source. The CPU controls the embedded device: it tells the sensors to capture data, and it sends the data to storage and/or to the communication module, which then transmits them to the destination. A sensor is a device that measures physical phenomena and converts them to a signal that can be read by an observer or, in our context, by a computer. The communication module typically uses wireless transmission (e.g. IEEE 802.15.4). The operation of the embedded device is constrained by the available power.

Conceptualization of the domain

For small and medium size isolated projects it can be relatively straightforward to know which stream of data measures a given property. Traditional database tables can work well in such situations. However, if we are talking about web scale and are aiming for interoperability, some conceptualization of the WoT domain is needed.

Knowledge about sensors needs to be encoded and structured so that it can be used to its full potential. Additional information, such as the phenomena they are measuring, the units of measurement, and the location of the sensor node, needs to accompany the numbers. For instance, if we wanted to know the amount of rain, we should be able to recognize that raindrop, rainfall, and precipitation refer to the same physical phenomenon and that all such sensors are a good source for our query. If we were interested in outside temperatures in the morning, we should be able to infer that a sensor node positioned in a stable is not a good source for us, because it measures the temperature inside. If we wanted to find out the air pressure in our city, we would need the system to be able to tell which sensor nodes have geographical coordinates belonging to the area (inverse geocoding). The conceptualization of the domain refers to modeling all this knowledge in a standard way. By using standards, interoperability between different systems can also be achieved.
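
A toy illustration of why such conceptualization helps: if each stream carries structured metadata (phenomenon, unit, indoor/outdoor), a query for outdoor precipitation can match streams labelled rainfall or raindrop even though the raw tables use different names. The synonym table below is a made-up stand-in for a shared ontology (in practice something like the W3C SSN model would be used).

```python
# Made-up synonym table standing in for a shared ontology of phenomena
SYNONYMS = {"precipitation": {"precipitation", "rainfall", "raindrop"}}

streams = [
    {"id": "s1", "phenomenon": "rainfall",    "unit": "mm/h", "outdoor": True},
    {"id": "s2", "phenomenon": "temperature", "unit": "degC", "outdoor": False},
    {"id": "s3", "phenomenon": "raindrop",    "unit": "mm/h", "outdoor": True},
]

def find_streams(concept, outdoor=None):
    """Return stream ids whose annotated phenomenon maps to the requested concept."""
    names = SYNONYMS.get(concept, {concept})
    return [s["id"] for s in streams
            if s["phenomenon"] in names and (outdoor is None or s["outdoor"] == outdoor)]

print(find_streams("precipitation", outdoor=True))   # -> ['s1', 's3']
```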

Mobile Social Networking

Internet of Things

As a growing number of observers realize, one of the most important aspects of the emerging Internet of Things is its incredible breadth and scope. Within a few years, devices on the IoT will vastly outnumber human beings on the planet, and the number of devices will continue to grow. Billions of devices worldwide will form a network unprecedented in history. Devices as varied as soil moisture sensors, street lights, diesel generators, video surveillance systems, even the legendary Internet-enabled toasters, will all be connected in one fashion or another.

Some pundits have focused only on the myriad addresses necessary for the sheer arithmetic count of devices and have pronounced IPv6 sufficient for the IoT. But this mistakes address space for addressability. No central address repository or existing address translation scheme can possibly deal with the frontier aspects of the IoT. Nor can addresses alone create the costly networking "horsepower" needed within the appliances, sensors, and actuators.

Devices from millions of manufacturers based in hundreds of countries will appear on the IoT (and disappear) completely unpredictably. This creates one of the greatest challenges of the IoT: management. This is a matter both of scope and device capabilities.

Traditional networked devices incorporate the processors and memory necessary for full networking protocol stacks (typically IPv6 today), the human interfaces necessary for control, and an infrastructure for management (unique addresses, management servers, and so on).

Data Exchanged by Internet of Things Devices

The kinds of information these hundreds of billions of IoT devices exchange will also be very different from those of the traditional Internet. Much of today's Internet traffic is oriented toward humans: applications such as e-mail, web browsing, and video streaming consist of relatively large chunks of data generated by machines and consumed by humans.

But the typical IoT data flow will be nearly diametrically opposed to this model. Machine-to-machine communications require minimal packaging and presentation overhead. For example, a moisture sensor in a farmer's field may have only a single value to send: the volumetric water content. It can be communicated in a few characters of data, perhaps with the addition of a location/identification tag. This value might change slowly throughout the day, but the frequency of meaningful updates will be low. Similar terse communication forms can be imagined for millions of other types of IoT sensors and devices. Many of these IoT devices may be simplex or nearly simplex in data flow, simply broadcasting a state or reading over and over while switched on, without even the capacity to "listen" for a reply.
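
For a sense of scale, here is a hedged sketch of such a terse message: a soil-moisture reading plus a short identifier fits comfortably in a few dozen bytes. The key=value encoding below is a hypothetical format, not a standard.

```python
def encode_reading(sensor_id, vwc_percent):
    """Encode a volumetric water content reading as a tiny ASCII payload."""
    return f"id={sensor_id};vwc={vwc_percent:.1f}".encode("ascii")

payload = encode_reading("field-07", 23.4)
print(payload, len(payload), "bytes")   # roughly 20 bytes per reading
```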

This raises another aspect of the typical IoT message: it's individually unimportant. For simple sensors and state machines, the variations in conditions over time may be small. Thus, any individual transmission from the majority of IoT devices is likely completely uncritical. These messages are being collected and interpreted elsewhere in the network, and a gap in the data will simply be ignored or extrapolated over.

Even more complex devices, such as a remotely monitored diesel generator, should generate little more traffic, again in terse formats unintelligible to humans, but gathered and interpreted by other devices in the IoT. Overall, the meaningful amount of data generated from each IoT device is vanishingly small—nearly exactly the opposite of the trends seen in the traditional Internet. For example, a temperature sensor might generate only a few hundred bytes of useful data per day, about the same as a couple of smartphone text messages. Because of this, very low bandwidth connections might be utilized for savings in cost, battery life, and other factors.

Loss of Data

Today’s traditional Internet is extremely reliable, even if labeled “best effort.” Overprovisioning of bandwidth (for normal situations) and backbone routing diversity have created an expectation of high service levels among Internet users. “Cloud” architectures and the structure of modern business organizations are built on this expectation of Internet quality and reliability.

But at the extreme edges of the network that will make up the vast statistical majority of the IoT, connections may often be intermittent and inconsistent in quality. Devices may be switched off at times or powered by solar cells with limited battery back-up. Wireless connections may be of low bandwidth or shared among multiple devices.

Traditional protocols such as TCP are designed to deal with lossy and inconsistent connections by resending data. Even though the data flowing to or from any individual IoT device may be exceedingly small, aggregate IoT traffic will grow quite large. Resending vast quantities of mostly individually unimportant data is clearly an unnecessary overhead.


Sensor Technologies

Sensors play an integral role in numerous modern industrial applications, including food processing and everyday monitoring of activities such as transport, air quality, medical therapeutics, and many more. While sensors have been with us for more than a century, modern sensors with integrated information and communications technology (ICT) capabilities—smart sensors—have been around for little more than three decades. Remarkable progress has been made in computational capabilities, storage, energy management, and a variety of form factors, connectivity options, and software development environments. These advances have occurred in parallel to a significant evolution in sensing capabilities. We have witnessed the emergence of biosensors that are now found in a variety of consumer products, such as tests for pregnancy, cholesterol, allergies, and fertility.

The development and rapid commercialization of low-cost microelectromechanical systems (MEMS) sensors, such as 3D accelerometers, has led to their integration into a diverse range of devices extending from cars to smartphones. Affordable semiconductor sensors have catalyzed new areas of ambient sensing platforms, such as those for home air-quality monitoring. This diverse range of low-cost sensors has fostered the emergence of pervasive sensing. Sensors and sensor networks can now be worn or integrated into our living environment, or even into our clothing, with minimal effect on our daily lives. Data from these sensors promises to support new proactive healthcare paradigms with early detection of potential issues, for example, heart disease risk (elevated cholesterol levels), liver disease (elevated bilirubin levels in urine), anemia (low ferritin levels in blood), and so forth. Sensors are increasingly used to monitor daily activities, such as exercise, with instant access to our performance through smartphones. The relationship between our well-being and our ambient environment is undergoing significant change. Sensor technologies now empower ordinary citizens with information about air and water quality and other environmental issues, such as noise pollution. Sharing and socializing this data online supports the evolving concept of citizen-led sensing. As people contribute their data online, crowdsourced maps of parameters such as air quality over large geographical areas can be generated and shared.

Sensors utilize a wide spectrum of transducer and signal transformation approaches with corresponding variations in technical complexity. These range from relatively simple temperature measurement based on a bimetallic thermocouple, to the detection of specific bacteria species using sophisticated optical systems. Within the healthcare, wellness, and environmental domains, there are a variety of sensing approaches, including microelectromechanical systems (MEMS), optical, mechanical, electrochemical, semiconductor, and biosensing. The proliferation of sensor-based applications is growing across a range of sensing targets such as air, water, bacteria, movement, and physiology. As with any form of technology, sensors have both strengths and weaknesses. Operational performance may be a function of the transduction method, the deployment environment, or the system components.

Key Sensor Modalities

Each sensor type offers different levels of accuracy, sensitivity, specificity, or ability to operate in different environmental conditions. There are also cost considerations. More expensive sensors typically have more sophisticated features that generally offer better performance characteristics. Sensors can be used to measure quantities of interest in three ways:

• Contact: This approach requires physical contact with the quantity of interest. Many classes of quantities can be sensed in this way—liquids, gases, objects such as the human body, and more. Deployment of such sensors obviously perturbs the state of the sample or subject to some degree; the type and extent of this impact is application-specific. Let us look at the example of human-body-related applications in more detail.

Comfort and biocompatibility are important considerations for on-body contact sensing. For example, sensors can cause issues such as skin irritation when left in contact for extended periods of time. Fouling of the sensor may also be an issue, and methods to minimize these effects are critical for sensors that have to remain in place for long durations. Contact sensors may have restrictions on size and enclosure design. Contact sensing is commonly used in healthcare- and wellness-oriented applications, particularly where physiological measurements are required, such as in electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG). The response time of contact sensors is determined by the speed at which the quantity of interest is transported to the measurement site. For example, sensors such as ECG electrodes that measure an electrical signal have a very fast response time. In comparison, galvanic skin response (GSR) sensors respond more slowly, as they require the transport of sweat to an electrode, a slower process. Contact surface effects, such as the quality of the electrical contact between an electrode and the subject’s skin, also play a role: poor contact can result in signal noise and the introduction of signal artifacts.

On-body contact sensing can be further categorized in terms of the degree of “invasion” or impact. Invasive sensors are those introduced, for example, into human organs through small incisions or into blood vessels, perhaps for in vivo glucose sensing or blood pressure monitoring. Minimally invasive sensing includes patch-type devices on the skin that monitor interstitial fluids. Non-invasive sensors simply have contact with the body without affecting it, as with pulse oximetry.
• Noncontact: This form of sensing does not require direct contact with the quantity of interest. This approach has the advantage of minimal perturbation of the subject or sample. It is commonly used in ambient sensing applications—applications based on sensors that are ideally hidden from view and, for example, track the daily activities and behaviors of individuals in their own homes. Such applications must have minimal impact on the environment or subject of interest in order to preserve its state. Sensors that are used in non-contact modes, for example passive infrared (PIR) sensors, generally have fast response times.

• Sample removal: This approach involves an invasive collection of a representative sample by a human or automated sampling system. Sample removal commonly occurs in healthcare and environmental applications, to monitor E. coli in water or glucose levels in blood, for example. Such samples may be analyzed using either sensors or laboratory-based analytical instrumentation.

With sensor-based approaches, small, hand-held, perhaps disposable sensors are commonly used, particularly where rapid measurements are required. The sensor is typically in close proximity to the sample collection site, as is the case with a blood glucose sensor. Such sensors are increasingly being integrated with computing capabilities to provide sophisticated features, such as data processing, presentation, storage, and remote connectivity. Analytical instruments, in contrast, generally have no size limitations and typically contain a variety of sophisticated features, such as autocalibration or inter-sample auto-cleaning and regeneration. Sample preparation is normally required before analysis, and some instruments include sample preparation as an integrated capability. Results for nonbiological samples are generally fast and very accurate. Biological analysis, such as bacteria detection, is usually slower, taking hours or days.


Web Real-Time Communication

Web Real-Time Communication (WebRTC) is a new standard and industry effort that extends the web browsing model. For the first time, browsers are able to directly exchange real-time media with other browsers in a peer-to-peer fashion.

The World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF) are jointly defining the JavaScript APIs (Application Programming Interfaces), the standard HTML5 tags, and the underlying communication protocols for the setup and management of a reliable communication channel between any pair of next-generation web browsers.

The standardization goal is to define a WebRTC API that enables a web application running on any device, through secure access to the input peripherals (such as webcams and microphones), to exchange real-time media and data with a remote party in a peer-to-peer fashion.

Web Architecture

The classic web architecture semantics are based on a client-server paradigm, where browsers send an HTTP (Hypertext Transfer Protocol) request for content to the web server, which replies with a response containing the information requested.

The resources provided by a server are closely associated with an entity known by a URI (Uniform Resource Identifier) or URL (Uniform Resource Locator).

In the web application scenario, the server can embed some JavaScript code in the HTML page it sends back to the client. Such code can interact with browsers through standard JavaScript APIs and with users through the user interface.

WebRTC Architecture

WebRTC extends the client-server semantics by introducing a peer-to-peer communication paradigm between browsers.

In the WebRTC Trapezoid model, both browsers are running a web application, each downloaded from a different web server. Signaling messages are used to set up and terminate communications. They are transported by the HTTP or WebSocket protocol via web servers that can modify, translate, or manage them as needed. It is worth noting that the signaling between browser and server is not standardized in WebRTC, as it is considered to be part of the application. As for the data path, a PeerConnection allows media to flow directly between browsers without any intervening servers. The two web servers can communicate using a standard signaling protocol such as SIP or Jingle (XEP-0166), or they can use a proprietary signaling protocol.
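
Since WebRTC deliberately leaves signaling to the application, one common choice is a small WebSocket relay that simply forwards offer/answer and ICE candidate messages between the peers. The sketch below uses the Python websockets package (version 10.1 or later assumed for the single-argument handler) purely to illustrate that relay role; the broadcast-to-all-peers behavior is a simplification, and none of this is mandated by the WebRTC specification.

```python
import asyncio
import websockets

connected = set()   # all currently connected signaling clients

async def relay(websocket):
    """Forward every signaling message (offer, answer, ICE candidate) to the other peers."""
    connected.add(websocket)
    try:
        async for message in websocket:
            for peer in connected:
                if peer is not websocket:
                    await peer.send(message)
    finally:
        connected.discard(websocket)

async def main():
    # Browsers would open ws://localhost:8765 and exchange SDP/ICE JSON through it
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()   # run forever

if __name__ == "__main__":
    asyncio.run(main())
```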

The most common WebRTC scenario is likely to be the one where both browsers are running the same web application, downloaded from the same web page.

WebRTC in the browser

A WebRTC web application (typically written as a mix of HTML and JavaScript) interacts with web browsers through the standardized WebRTC API, allowing it to properly exploit and control the real-time browser functions. The WebRTC web application also interacts with the browser, using both WebRTC and other standardized APIs, both proactively (e.g., to query browser capabilities) and reactively (e.g., to receive browser-generated notifications).

The WebRTC API must therefore provide a wide set of functions, such as connection management (in a peer-to-peer fashion), negotiation, selection and control of encoding/decoding capabilities, media control, firewall and NAT traversal, and so on.

Let us imagine a real-time audio and video call between two browsers. Communication, in such a scenario, might involve direct media streams between the two browsers, with the media path negotiated and instantiated through a complex sequence of interactions involving the following entities:

• The caller browser and the caller JavaScript application (e.g., through the mentioned JavaScript API)
• The caller JavaScript application and the application provider (typically, a web server)
• The application provider and the callee JavaScript application
• The callee JavaScript application and the callee browser (again through the application-browser JavaScript API)

