Voice Chat & Video Conferencing

INDEX

Title of Project
Certificate
Acknowledgements

1 Introduction

2 Design & Implementation
   2.1 Why Java Platform
   2.2 Java Media Framework
       Streaming Media
       Architecture
       Presentation
       Processing
       Capture
       Media Data Storage & Transmission
   2.3 Real-time Transport Protocol
       Streaming Media
       RTP Architecture
       RTP Applications
   2.4 JMF RTP Application Programming Interface
       RTP API Architecture
       Reception
       Transmission
   2.5 Requirements & Specification
       Feasibility Analysis
       Requirements
       Functionality Specification
   2.6 Diagrams

3 Results
   3.1 Snapshots
   3.2 Future Offshoots
   3.3 Conclusion

References

CHAPTER

1

INTRODUCTION

Introduction

The utmost priority must be given to defining the problem statement of the project to be undertaken. A project is a set of files, related to each other, developed by the creators of the system to accomplish a specific purpose. Our purpose is to design a project for “VOICE CHATTING & VIDEO CONFERENCING”.

As the name suggests, our project is based on audio/video transmission and reception. Through our application, two or more persons on an intranet can chat with one another and can also hold a video conference. It is a client-server application in which the Server handles all the traffic. The person (at one of the computers in the network) who wants to chat or conference with another person sends a request to the Server, and after the request is accepted they can chat or conference successfully. The Server (which is operated by a person as well) can also voice chat or conference with the clients.

Our application is programmed in the Java programming language. The other tools that we have used to build it are JDK 1.6 (Java Development Kit), JMF 2.0 (Java Media Framework) and RTP (Real-time Transport Protocol).

JMF is a package used to develop software related to audio and video. It enables an application to capture media data (audio/video) and transmit it to a target device. RTP is the protocol designed to handle real-time traffic on an intranet or the Internet; it sits between UDP and the application program that uses UDP.

Chapter

2

Design
&
Implementation

2.1 Why Java platform

Java is ideally suited to become the standard application development language for wireless devices, providing us with many benefits. Here are some of the most important ones:

Cross-platform compatibility - A Java application can easily be moved between different devices and platforms, as long as a JVM has been developed for those devices.

Object-oriented programming - Java has better abstraction mechanisms and higher-level programming constructs than C++.

Huge Java developer community - Java has become the most popular programming language taught in schools and universities.

Security - Java is known for its security features (class file verification, cryptography support, etc.).

Dynamic - Java classes can easily be downloaded dynamically over the network and integrated with the running application.

2.2 Java Media Framework

2.2.1 Streaming Media

[pic]

Media processing model

Any data that changes meaningfully with respect to time can be characterized as time-based media. Audio clips, movie clips and animations are common forms of time-based media. They can be obtained from various sources such as local or network files, cameras, microphones and live broadcasts.

Time-based media is also referred to as streaming media: it is delivered in a steady stream that must be received and processed within a particular time frame to produce acceptable results. Media data travels in media streams that are obtained from a local file, acquired over the network, or captured from a camera or microphone.

[pic]

Common Video formats

[pic]

Common Audio formats

Time-based media is presented through output devices such as speakers and monitors. An output destination for media data is referred to as a data sink. In many cases the presentation of the media stream cannot begin immediately; the delay experienced before presentation begins is called latency. The data in a media stream may also be manipulated before it is presented to the user.

Time-based media can be captured from a live source for processing and playback. For this we need capture devices. Capture devices can be characterized as either push or pull sources. A still camera is a pull source: the user controls when to capture an image. A microphone is a push source: the live source continuously provides a stream of audio. The Java Media Framework (JMF) provides an architecture and messaging protocol for managing the acquisition, processing and delivery of time-based media data. JMF is designed to support most standard media content types such as AIFF, AVI, GSM, MIDI, MPEG and WAV. JMF implementations can leverage the capabilities of the underlying operating system, while developers can create programs that feature time-based media by writing to the JMF API. With JMF, developers can easily create applets and applications that present, capture, manipulate and store time-based media.

2.2.2 Architecture

[pic]

High-level JMF Architecture

When you play a movie using a VCR, you provide the media stream to the VCR by inserting a video tape. The VCR reads and interprets the data and sends appropriate signals to the TV and speakers.

[pic]

Recording, Processing and presenting time-based media.

JMF uses this same basic model. A data source encapsulates the media stream much like a video tape, and a player provides processing and control mechanisms similar to a VCR. Playing and capturing audio and video with JMF requires the appropriate input and output devices.

Time Model

JMF keeps time to nanosecond precision. A particular point in time is typically represented by a Time object. Classes that support the JMF time model implement Clock to keep track of time for a particular media stream. This interface defines the basic timing and synchronization operations that are needed to control the presentation of media data. A Clock uses a TimeBase to keep track of the passage of time while a media stream is being presented. To keep track of the current media time, a Clock uses:

■ The time-base start time - the time that its TimeBase reports when the presentation begins.
■ The media start time - the position in the media stream where presentation begins.
■ The playback rate - how fast the Clock is running in relation to its TimeBase. The rate is a scale factor that is applied to the TimeBase.
The current media time is calculated as follows:
MediaTime = MediaStartTime + Rate * (TimeBaseTime - TimeBaseStartTime)
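
For illustration, this mapping can be written as a small Java method (a sketch only; the class and method names below are ours, not part of the JMF API):

    // Sketch: how a Clock maps time-base time to media time.
    // All times are in nanoseconds; the rate is a unitless scale factor.
    public class MediaTimeExample {
        static long mediaTime(long mediaStartTime, double rate,
                              long timeBaseTime, long timeBaseStartTime) {
            return mediaStartTime + (long) (rate * (timeBaseTime - timeBaseStartTime));
        }

        public static void main(String[] args) {
            // Media started at 10 s, playing at 2x, 2 s of time-base time have elapsed.
            long now = mediaTime(10000000000L, 2.0, 7000000000L, 5000000000L);
            System.out.println(now);   // 14000000000 ns, i.e. 14 s into the media
        }
    }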

Managers

The JMF API consists of interfaces that define the behavior and interaction of objects. By using intermediary objects called managers, JMF makes it easy to integrate new implementations of key interfaces that can be used with existing classes. There are four managers:

Manager - handles the construction of Players, Processors, DataSources and DataSinks.
PackageManager - maintains a registry of packages that contain JMF classes.
CaptureDeviceManager - maintains a registry of available capture devices.
PlugInManager - maintains a registry of available JMF plug-in processing components.

If you extend JMF functionality by implementing a new plug-in, you can register it with the PlugInManager to make it available to Processors that support the plug-in API.

Event Model

JMF uses a structured event-reporting mechanism to keep JMF-based programs informed of the current state of the media system and to enable them to respond to media-driven error conditions, such as out-of-data and resource-unavailable conditions. Whenever a JMF object needs to report on the current conditions, it posts a MediaEvent. MediaEvent is subclassed to identify many particular types of events.

Data Model

JMF media players use DataSources to manage the transfer of media content. A DataSource encapsulates both the location of the media and the protocol and software used to deliver it. A DataSource is identified by either a JMF MediaLocator or a URL. JMF defines several types of DataSource objects, categorized according to how data transfer is initiated:

Pull data-source - the client initiates the data transfer and controls the flow of data from pull data-sources. There are two types of pull data-sources: PullDataSource and PullBufferDataSource, which uses a Buffer object as its unit of transfer.

Push data-source - the server initiates the data transfer and controls the flow of data from push data-sources. Push data-sources include broadcast media, multicast media and video-on-demand. There are two types of push data-sources: PushDataSource and PushBufferDataSource, which uses a Buffer object as its unit of transfer.

JMF defines two types of specialty data sources, cloneable data sources and merging data sources.

A Cloneable data source can be used to create clones of either a pull or push DataSource. The clones don’t necessarily have the same properties as the cloneable data source used to create them or the original DataSource.

A MergingDataSource can be used to combine the SourceStreams from several DataSources into a single DataSource. This enables a set of DataSources to be managed from a single point of control.
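
As a rough sketch of how a cloneable data source might be used (it assumes JMF's Manager.createCloneableDataSource and the SourceCloneable interface; the split helper name is ours):

    import javax.media.Manager;
    import javax.media.protocol.DataSource;
    import javax.media.protocol.SourceCloneable;

    // Sketch: wrap a DataSource so that a second copy of its stream can be obtained.
    // "original" is assumed to be a capture or file DataSource created elsewhere.
    public class CloneSource {
        static DataSource[] split(DataSource original) {
            DataSource cloneable = Manager.createCloneableDataSource(original);
            DataSource copy = ((SourceCloneable) cloneable).createClone();
            // e.g. feed "cloneable" to a Processor for transmission and
            // "copy" to a local monitor Player.
            return new DataSource[] { cloneable, copy };
        }
    }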

Control

[pic]

JMF Controls

Objects that have a Format can implement the FormatControl interface to provide access to the Format; FormatControl also provides methods for querying and setting the format. One FormatControl is TrackControl, which provides a mechanism for controlling what processing a Processor object performs on a particular track of media data. Through it, we can specify what format conversions are performed on individual tracks and select the Effect, Codec and Renderer plug-ins used by the Processor. Other controls such as PortControl (which defines methods for controlling the output of a capture device) and MonitorControl (which enables captured or encoded media data to be previewed) enable user control over the capture process. BufferControl enables user-level control over the buffering done by a particular object.

A JMF Control provides a mechanism for setting and querying the attributes of an object. A Control often provides access to a corresponding user interface component that enables user control over the object's attributes. Objects that expose Controls define methods for retrieving their associated Control objects. DataSource and PlugIn use the Controls interface to provide access to their Control objects.

JMF also has several codec controls to enable control over hardware or software encoders and decoders. A control provides access to a Component that exposes its control behavior to the end user. If you don’t want to use the default control components provided by a particular implementation, you can implement your own and use the event listener mechanism to determine when they need to be updated.

2.2.3 Presentation

The presentation process is modeled by the Controller interface. It defines the basic state and control mechanism for an object that controls, presents or captures time-based media. It defines the phases that a media controller goes through and provides a mechanism for controlling the transitions between those phases. The JMF API has two types of Controllers: Players and Processors. A Controller is constructed for a particular data source and is not reused to present other media data.

Players

[pic]
JMF Player Model

A Player processes an input stream and renders it at a precise time. A DataSource is used to deliver the input media stream to the Player. The rendering destination depends on the type of media being presented. A Player does not provide any control over the processing that it performs or over how it renders the media data.
[pic]

Player States.

RCE - RealizeCompleteEvent
PFCE - PrefetchCompleteEvent
SE - StopEvent

A Player can be in one of six states. The Clock interface defines the two primary states: Stopped and Started. To facilitate resource management, Controller breaks the Stopped state down into five standby states: Unrealized, Realizing, Realized, Prefetching and Prefetched.

Let us look at each state:
A Player in the Unrealized state has been instantiated but does not yet know anything about its media.
When realize is called, a Player moves into the Realizing state. It is in the process of determining its resource requirements, which might include rendering resources other than exclusive-use resources.
After that, the Player moves into the Realized state. In this state it knows what resources it needs and has information about the type of media it is to present.
When prefetch is called, the Player enters the Prefetching state. Here it preloads its media data, obtains exclusive-use resources and does whatever else it needs to do to prepare itself to play.
When a Player finishes prefetching, it enters the Prefetched state. A Prefetched Player is ready to be started.
Calling start puts the Player into the Started state. A Started Player object's time-base time and media time are mapped and its clock is running.

A Player posts a TransitionEvent as it moves from one state to another. The ControllerListener interface provides a way for your program to determine what state a Player is in and to respond appropriately. Using this event-reporting mechanism, you can manage a Player object's start latency by controlling when it begins realizing and prefetching.
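
A minimal sketch of how an application might walk a Player through these states using a ControllerListener (the file name is a placeholder and error handling is omitted):

    import javax.media.*;

    // Sketch: drive a Player through its states and react to transition events.
    public class PlayerStates implements ControllerListener {
        public static void main(String[] args) throws Exception {
            Player player = Manager.createPlayer(new MediaLocator("file:clip.wav"));
            player.addControllerListener(new PlayerStates());
            player.realize();                        // Unrealized -> Realizing -> Realized
        }

        public void controllerUpdate(ControllerEvent e) {
            Player p = (Player) e.getSourceController();
            if (e instanceof RealizeCompleteEvent) {
                p.prefetch();                        // Realized -> Prefetching -> Prefetched
            } else if (e instanceof PrefetchCompleteEvent) {
                p.start();                           // Prefetched -> Started
            } else if (e instanceof EndOfMediaEvent) {
                p.stop();
                p.close();
            }
        }
    }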

Processors

[pic]

JMF Processor model

A Processor can also be used to present media data. A Processor is just a specialized type of Player that provides control over what processing is performed on the input media stream. It supports all of the same presentation controls as a Player. A Processor can output media data through a DataSource so that it can be presented by another Player or Processor, further manipulated by another Processor, or delivered to some other destination such as a file. Additional custom Control types might be supported by a particular Player or Processor implementation to provide other control behaviors and expose custom user interface components. A Player or Processor generally provides two standard user interface components, a visual component and a control-panel component. A Processor allows the application developer to define the type of processing that is applied to the media data. This enables the application of effects, mixing and compositing in real time.

2.2.4 Processing

The processing of the media data is split into several stages:

[pic]

Processor Stages

Demultiplexing is the process of parsing the input stream. If the stream contains multiple tracks, they are extracted and output separately.
Pre-processing is the process of applying effect algorithms to the tracks extracted from the input stream.
Transcoding is the process of converting each track from one input format to another.
Post-processing is the process of applying effect algorithms to the decoded tracks.
Multiplexing is the process of interleaving the transcoded media tracks into a single output stream. For example, separate audio and video tracks might be multiplexed into a single MPEG-1 data stream.
Rendering is the process of presenting the media to the user.

The processing at each stage is done by a separate processing component called a plug-in. There are five types of JMF plug-ins:

Demultiplexer - parses media streams such as WAV, MPEG, etc.
Effect - performs special-effect processing on a track.
Codec - performs data encoding and decoding.
Multiplexer - combines multiple tracks of input data into a single interleaved output stream.
Renderer - processes the media data and delivers it to a destination.

Processor states

[pic]

Processor States

CCE – ConfigureCompleteEvent

A Processor has two additional standby states, Configuring and Configured, which occur before the Realizing state.

A Processor enters the Configuring state when configure is called. Here it connects to the DataSource, demultiplexes the input stream and accesses information about the format of the input data.
A Processor enters the Configured state when it is connected to the DataSource and the data format has been determined.
When realize is called, the Processor transitions to the Realized state. Once the Processor is Realized it is fully constructed.
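
A minimal sketch of taking a Processor through Configuring, Configured and Realized. The capture locator is a placeholder, polling is used instead of a ControllerListener for brevity, and ContentDescriptor.RAW_RTP is assumed to be available in the JMF release in use:

    import javax.media.*;
    import javax.media.protocol.*;

    // Sketch: configure a Processor and ask it to emit raw RTP-framed output.
    public class ConfigureProcessor {
        static void waitFor(Processor p, int state) throws InterruptedException {
            while (p.getState() < state) Thread.sleep(50);   // crude; use a ControllerListener in real code
        }

        public static void main(String[] args) throws Exception {
            // "vfw://0" is a placeholder capture locator; any MediaLocator would do.
            Processor proc = Manager.createProcessor(new MediaLocator("vfw://0"));

            proc.configure();                                 // -> Configuring
            waitFor(proc, Processor.Configured);              // demultiplexing done, formats known

            // Tell the Processor to emit its tracks as a raw RTP-framed stream.
            proc.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW_RTP));

            proc.realize();                                   // -> Realizing -> Realized
            waitFor(proc, Controller.Realized);
        }
    }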

2.2.5 Capture

A multimedia capture device can act as a source for multimedia data delivery. For example, a microphone can capture raw audio input, or a digital video capture board might deliver digital video from a camera. Such capture devices are abstracted as DataSources. For example, a device that provides timely delivery of data can be represented as a PushDataSource. Some devices deliver multiple data streams; the corresponding DataSource can contain multiple SourceStreams that map to the data streams provided by the device.
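
A sketch of how a capture device might be located through CaptureDeviceManager and wrapped in a DataSource (the 8 kHz, 16-bit mono linear audio format is just one common choice; passing null would list all registered devices):

    import java.util.Vector;
    import javax.media.*;
    import javax.media.format.AudioFormat;
    import javax.media.protocol.DataSource;

    // Sketch: find a capture device (e.g. a microphone) and connect to it.
    public class FindMicrophone {
        public static void main(String[] args) throws Exception {
            AudioFormat wanted = new AudioFormat(AudioFormat.LINEAR, 8000, 16, 1);
            Vector devices = CaptureDeviceManager.getDeviceList(wanted);
            if (devices.isEmpty()) {
                System.out.println("No matching capture device is registered with JMF");
                return;
            }
            CaptureDeviceInfo info = (CaptureDeviceInfo) devices.get(0);
            System.out.println("Using: " + info.getName());

            // The device is abstracted as a (push) DataSource, as described above.
            DataSource source = Manager.createDataSource(info.getLocator());
            source.connect();
        }
    }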

2.2.6 Media Data Storage & Transmission

A DataSink is used to read media data from a DataSource and render the media to some destination. A particular DataSink might write data to a file, write data across the network, or function as an RTP broadcaster. Like Players, DataSink objects are constructed through the Manager using a DataSource. A DataSink can use a StreamWriterControl to provide additional control over how data is written to a file.
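
A sketch of saving a Processor's output to a local file through a DataSink. The destination file name is a placeholder, and the Processor is assumed to have already been realized with a file-oriented output content descriptor:

    import java.io.IOException;
    import javax.media.*;
    import javax.media.protocol.DataSource;

    // Sketch: write a Processor's output to a file via a DataSink.
    public class SaveToFile {
        static DataSink save(Processor proc) throws IOException, NoDataSinkException {
            DataSource output = proc.getDataOutput();
            MediaLocator dest = new MediaLocator("file:capture.avi");
            DataSink sink = Manager.createDataSink(output, dest);
            sink.open();
            sink.start();        // data flows once the Processor is started
            proc.start();
            return sink;         // caller later calls: proc.stop(); sink.stop(); sink.close();
        }
    }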

2.3 Real-time Transport Protocol

2.3.1 Streaming Media

When media content is streamed to a client in real time, the client can begin to play the stream without having to wait for the complete stream to download. The term streaming media is often used to refer both to the technique of delivering content over the network in real time and to the real-time media content that is delivered. Streaming media is changing the way people communicate and access information.

Transmitting media data across the network in real time requires high network throughput. For streaming media it is easier to compensate for lost data than to compensate for large delays in receiving the data. Consequently, the protocols used for static data, such as TCP, don't work well for streaming media.

So, underlying protocols other than TCP are typically used. One such protocol is the User Datagram Protocol (UDP). UDP is a general-purpose, unreliable transport-layer protocol; it is a low-level networking protocol on top of which more application-specific protocols are built. The Internet standard for transporting real-time data such as audio and video is the Real-time Transport Protocol (RTP).
2.3.2 RTP Architecture

[pic]

RTP architecture

RTP provides end-to-end network delivery services for the transmission of real-time data. RTP can be used over both unicast and multicast network services. Over unicast, separate copies of the data are sent from the source to each destination. Over multicast, the data is sent from the source only once and the network is responsible for transmitting the data to multiple locations. This is more efficient for multimedia applications such as video conferences. RTP enables you to identify the type of data being transmitted, determine what order the packets of data should be presented in, and synchronize media streams from different sources. RTP data packets are not guaranteed to arrive in the order they were sent - in fact, they are not guaranteed to arrive at all. RTP is augmented by a control protocol (RTCP) that enables you to monitor the quality of the data distribution and provides control and identification mechanisms for RTP transmissions.

An RTP session is an association among a set of applications communicating with RTP. A session is identified by a network address and a pair of ports: one port is used for media data and the other is used for control (RTCP) data. A participant is a single machine, host or user participating in the session. Each media type is transmitted in a different session. For example, if both audio and video are used in a conference, one session is used to transmit the audio and a second session is used to transmit the video. This enables participants to choose which media types they want to receive.

Data packets

[pic]

RTP data packet header format

The media data for a session is transmitted as a series of packets known as an RTP stream. Each data packet in a stream contains two parts, a structured header and the actual data (the payload). The header of an RTP data packet contains:
The RTP version number (V): 2 bits. The version defined by the current specification is 2.
Padding (P): 1 bit. If the padding bit is set, there are one or more bytes at the end of the packet that are not part of the payload. The very last byte in the packet indicates the number of bytes of padding. It is used by some encryption algorithms.
Extension (X): 1 bit. If the extension bit is set, the fixed header is followed by one header extension. This extension mechanism enables implementations to add information to the RTP Header.
CSRC Count (CC): 4 bits. The number of CSRC identifiers that follow the fixed header. If the CSRC count is zero, the synchronization source is the source of the payload.
Marker (M): 1 bit. A marker bit defined by the particular media profile.
Payload Type (PT): 7 bits. An index into a media profile table that describes the payload format. The payload mappings for audio and video are specified in RFC 1890.
Sequence Number: 16 bits. A unique packet number that identifies this packet's position in the sequence of packets. The packet number is incremented by one for each packet sent.
Timestamp: 32 bits. Reflects the sampling instant of the first byte in the payload. Several consecutive packets can have the same timestamp if they are logically generated at the same time - for example, if they are all part of the same video frame.
SSRC: 32 bits. Identifies the synchronization source. If the CSRC count is zero, the payload source is the synchronization source. If the CSRC count is nonzero, the SSRC identifies the mixer.
CSRC: 32 bits each. Identifies the contributing sources for the payload. The number of contributing sources is indicated by the CSRC count field; there can be up to 16 contributing sources. If there are multiple contributing sources, the payload is the mixed data from those sources.
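
To make this layout concrete, the following sketch (plain Java, independent of JMF) extracts the fixed-header fields described above from a raw packet buffer assumed to hold at least the 12-byte fixed header in network byte order:

    // Sketch: decode the 12-byte fixed RTP header described above.
    public class RtpHeader {
        public static void dump(byte[] packet) {
            int b0 = packet[0] & 0xFF;
            int b1 = packet[1] & 0xFF;
            int version       = (b0 >> 6) & 0x03;             // V: 2 bits
            boolean padding   = ((b0 >> 5) & 0x01) == 1;       // P: 1 bit
            boolean extension = ((b0 >> 4) & 0x01) == 1;       // X: 1 bit
            int csrcCount     = b0 & 0x0F;                     // CC: 4 bits
            boolean marker    = ((b1 >> 7) & 0x01) == 1;       // M: 1 bit
            int payloadType   = b1 & 0x7F;                     // PT: 7 bits
            int sequence      = ((packet[2] & 0xFF) << 8) | (packet[3] & 0xFF);
            long timestamp    = ((long) (packet[4] & 0xFF) << 24) | ((packet[5] & 0xFF) << 16)
                              | ((packet[6] & 0xFF) << 8) | (packet[7] & 0xFF);
            long ssrc         = ((long) (packet[8] & 0xFF) << 24) | ((packet[9] & 0xFF) << 16)
                              | ((packet[10] & 0xFF) << 8) | (packet[11] & 0xFF);
            System.out.printf("V=%d P=%b X=%b CC=%d M=%b PT=%d seq=%d ts=%d ssrc=%d%n",
                    version, padding, extension, csrcCount, marker, payloadType,
                    sequence, timestamp, ssrc);
        }
    }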

Control Packets

Control data packets (RTCP packets) are periodically sent to all of the participants in the session. They can contain information about the quality of service for the session participants, about the source of the media, and statistics pertaining to the data that has been transmitted so far. There are several types of RTCP packets:

Sender Report - A participant that has recently sent data packets issues a Sender Report that contains the total number of packets and bytes sent, plus other information that can be used to synchronize media streams from different sessions.

Receiver Report - A participant periodically issues Receiver Reports for all of the sources from which it is receiving data packets. A Receiver Report contains information about the number of packets lost, the highest sequence number received, and a timestamp used to estimate the round-trip delay.

Source Description - All compound RTCP packets must include a source description (SDES) element that contains the canonical name (CNAME) that identifies the source. Additional information might also be included.

BYE - When a source is no longer active, it sends an RTCP BYE packet. The BYE notice may include the reason that the source is leaving the session.

Application Specific - RTCP APP packets provide a mechanism for applications to define and send custom information via the RTP control port.

2.3.3 RTP Applications

RTP applications are often divided into those that need to be able to receive data from the network (RTP clients) and those that need to be able to transmit data across the network (RTP servers). Some applications do both.

2.4 JMF RTP API

2.4.1 RTP API Architecture

[pic]

High-level JMF RTP architecture

JMF enables the playback and transmission of RTP streams through the APIs defined in the javax.media.rtp, javax.media.rtp.event and javax.media.rtp.rtcp packages. The JMF RTP APIs are designed to work with the capture, presentation and processing capabilities of JMF. Players and Processors are used to present and manipulate RTP media streams. JMF can be extended to support additional RTP-specific formats and dynamic payloads through the standard plug-in mechanism.

[pic]

RTP reception

Users can play incoming RTP streams locally, save them to a file, or both. Similarly, they can use the APIs to transmit captured or stored media streams across the network. The outgoing streams can also be played locally, saved to a file, or both.

[pic]

RTP transmission

Session Manager

In JMF, a SessionManager is used to coordinate an RTP session. The session manager keeps track of the session participants and the streams that are being transmitted. It maintains the state of the session as viewed from the local participant. It also handles the RTCP control channel and supports RTCP for both senders and receivers. The SessionManager interface defines methods that enable an application to initialize and start participating in a session, remove individual streams created by the application, and close the entire session.

Session Statistics

The session manager maintains statistics for all of the RTP and RTCP packets sent and received in the session. It provides access to global reception and transmission statistics:

GlobalReceptionStats - maintains global reception statistics for the session.
GlobalTransmissionStats - maintains cumulative transmission statistics for all local senders.

Statistics for a particular participant are reported through:

ReceptionStats - maintains source reception statistics for an individual participant.
TransmissionStats - maintains transmission statistics for an individual send stream.

Session Participants

Each participant is represented by an instance of a class that implements the Participant interface. Participants can be passive or active. There is exactly one local participant that represents the local client/server. A participant can own more than one stream, each of which is identified by the synchronization source identifier (SSRC) used by the source of the stream.

Session Streams

For each stream of RTP data packets there is an RTPStream object. There are two types of RTPStream:

ReceiveStream - represents a stream that is being received from a remote participant.
SendStream - represents a stream of data coming from a Processor or input DataSource that is being sent over the network.

RTP Events

RTP-specific events are used to report on the state of the RTP session and its streams. To receive notification of RTP events, you need to implement the appropriate listener and register it with the session manager:

SessionListener - receives notification of changes in the state of the session.
SendStreamListener - receives notification of changes in the state of an RTP stream that is being transmitted.
ReceiveStreamListener - receives notification of changes in the state of an RTP stream that is being received.
RemoteListener - receives notification of events or RTP control messages received from a remote participant.
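
As a sketch, a ReceiveStreamListener can build a Player whenever a remote participant starts sending. It assumes the listener has been registered with the session manager (for example via RTPManager.addReceiveStreamListener):

    import javax.media.*;
    import javax.media.rtp.*;
    import javax.media.rtp.event.*;

    // Sketch: play each newly received RTP stream.
    public class PlayNewStreams implements ReceiveStreamListener {
        public void update(ReceiveStreamEvent ev) {
            if (ev instanceof NewReceiveStreamEvent) {
                try {
                    ReceiveStream stream = ev.getReceiveStream();
                    Player player = Manager.createPlayer(stream.getDataSource());
                    player.start();          // start() realizes and prefetches implicitly
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }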

RTP Data

Data Handlers

The JMF RTP APIs are designed to be transport-protocol independent. A custom RTP data handler can be created to work over a specific protocol. The RTPPushDataSource class defines the basic elements of a JMF RTP data handler. It has both an input data stream and an output data stream and can be used for the data channel or the control channel of an RTP session. A custom RTPSocket can be used to construct a Player through the Manager. JMF defines the name and location for a custom RTPSocket implementation: .media.protocol.rtpraw.DataSource

Data Formats

All RTP-specific data uses an RTP-specific format encoding as defined in the AudioFormat and VideoFormat classes.

AudioFormat defines four standard RTP encoding strings:

public static final String ULAW_RTP = "JAUDIO_G711_ULAW/rtp";
public static final String DVI_RTP = "dvi/rtp";
public static final String G723_RTP = "g723/rtp";
public static final String GSM_RTP = "gsm/rtp";

VideoFormat defines three standard RTP encoding strings:

public static final String JPEG_RTP = "jpeg/rtp";
public static final String H261_RTP = "h261/rtp";
public static final String H263_RTP = "h263/rtp";

RTP Controls

The RTP API defines one RTP-specific control, RTPControl. RTPControl provides a mechanism to add a mapping between a dynamic payload and a Format, along with methods for accessing session statistics and getting the current payload Format.

2.4.2 Reception

The presentation of an RTP stream is handled by a Player. The Player is constructed from a MediaLocator of the form: rtp://address:port[:ssrc]/content-type/[ttl]

Such a Player is constructed for and connected to the first stream in the session. If there are multiple streams in the session, a session manager is required; the application can then receive notification from the session manager whenever a stream is added to the session and construct a Player for each new stream.
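
For the single-stream case, a Player can be built directly from such a locator. The multicast address, port, content type and TTL below are placeholders for an actual session:

    import javax.media.*;

    // Sketch: play the first stream of an RTP session straight from an rtp:// locator.
    public class PlayRtpStream {
        public static void main(String[] args) throws Exception {
            MediaLocator loc = new MediaLocator("rtp://224.10.20.30:22000/audio/1");
            Player player = Manager.createPlayer(loc);
            player.start();      // realizes, prefetches and plays once data arrives
        }
    }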

2.4.3 Transmission

A session manager can also be used to initialize and control a session so that you can stream data across the network. The data to be streamed is acquired from a Processor. For example, to create a send stream to transmit data from a live capture source, you would:

1. Create, initialize, and start a SessionManager for the session.
2. Construct a Processor using the appropriate capture DataSource.
3. Set the output format of the Processor to an RTP-specific format. An appropriate RTP packetizer codec must be available for the data format you want to transmit.
4. Retrieve the output DataSource from the Processor.
5. Call createSendStream on the session manager and pass in the DataSource.

Transmission is controlled through the SendStream start and stop methods. When it is first started, the SessionManager behaves as a receiver. As soon as a SendStream is created, it begins to send out RTCP sender reports and behaves as a sender host as long as one or more send streams exist. If all SendStreams are closed, it becomes a passive receiver again.
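
The following sketch follows these steps using the RTPManager flavour of the session manager. Steps 2 and 3 (building the Processor from a capture DataSource and setting its output to an RTP-specific format) are assumed to have been done already, as in the Processor sketch in section 2.2.4; the destination address and ports are placeholders:

    import java.net.InetAddress;
    import javax.media.*;
    import javax.media.protocol.DataSource;
    import javax.media.rtp.*;

    // Sketch: transmit the output of an already-configured Processor over RTP.
    public class TransmitSketch {
        static void transmit(Processor proc, String destIp) throws Exception {
            RTPManager mgr = RTPManager.newInstance();                         // step 1
            mgr.initialize(new SessionAddress(InetAddress.getLocalHost(), 42050));
            mgr.addTarget(new SessionAddress(InetAddress.getByName(destIp), 42050));

            DataSource out = proc.getDataOutput();                             // step 4
            SendStream stream = mgr.createSendStream(out, 0);                  // step 5
            stream.start();
            proc.start();
            // later: stream.close(); mgr.removeTargets("session over"); mgr.dispose();
        }
    }
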
2.5 Requirements & Specification

2.5.1 Feasibility Analysis

Technical Feasibility - This project does not require advanced technology; it requires only knowledge of core Java and the Java Media Framework (JMF).

Economical Feasibility - The development of the project won't cost too much, as it requires only a microphone and a web camera as extra hardware besides the computers. It involves very few persons and does not require any outside professionals. As the project is based on Java, we don't have any extra cost for setting up a network. The tools that we have used are freeware and can be downloaded from the Internet.

Operational Feasibility - This is a live project. A person using the application does not require extra technical or computer skills; it can be easily operated.

Application

❖ To make an application satisfying customer requests.
❖ To provide a good user interface.
❖ To provide a better way of communication.

Application Installation Process

Install audio drivers for Voice Chat.

Install Web Camera for Video Conferencing.

Install Java Development Kit v 1.6.

Install Java Media Framework.

After successful installation execute the application.

2.6 Diagrams

User Info (new user form): Login name, Password

Data Flow Diagram

[pic]

During initialization

[pic]

Login Request sent by first Client

[pic]

Request sent by Client except the first one

[pic]

Voice Chat / Video Conference Request & Acceptance

[pic]

A Client ends Chat/Conference

[pic]

A Client Logs out

[pic]

New User Creation

[pic]

User changes Password

Chapter

3

Results

3.1 Snapshots

[pic]

Server Initialization

[pic]
User Creation

[pic]

Creation Success

[pic]

Password Change

[pic]

Login Request

[pic]

Server Entry

[pic]

Server Entry for New Login

[pic]

Login Form

[pic]

Chat & Conferencing Form

[pic]

Chat Form of different User
(Difference in port Number)
When Chat or Conferencing is happening

[pic]

Chat Form after end
[pic]

Log out request

[pic]

Server Closing

3.2 Future Offshoots

Chat Rooms

Due to time limitations we have used just a single chat room. The application can be expanded to include more chat rooms, so that clients can chat or conference in the particular chat room they intend, and each client has a list of the logged-in clients for the room in which he is currently active.

Voice-mail

We can introduce a new feature called voice-mail in the application. Voice-mail is a facility that a client can use to send a voice message to a target client which was offline at the time the message was sent. The target client can listen to the voice messages sent by different clients whenever it logs in.

Video-mail

We can have another interesting feature known as video-mail. In video-mail a client records a video clip and sends it to a recipient client which is offline. The recipient client will see the video messages when it logs in.

File Transfer

We also want to provide the facility of transferring files between the clients. The files may be text documents, worksheets or other types of files. Clients that are having a voice or video conversation can send and receive files to and from one another.

3.3 Conclusion

We have described our experiments in building an application for audio/video chat based on the Java platform, using the Java Media Framework and the Real-time Transport Protocol as tools.

Due to limited time we have implemented only Voice Chat and Video Conferencing, which lets the user converse with his loved ones, friends and others.

However, this is not the end. In future, more applications can be developed in the field of audio and video reception and transmission, on the Java platform or on other platforms.

References

Books:

Java 2: The Complete Reference by Herbert Schildt

Java - How To Program by H. M. Deitel & P. J. Deitel

Java Media Framework API Guide 2.0

Websites

1. www.java.com

2. www.sun.com

3. www.freesourcecode.com

Tools Used

Java Development Kit v1.6

Java Media Framework v2.0

LiknoWebButtonMakers

