Telecommunication transaction processing systems

Telecommunication networks can generate vast numbers of transactions, where each transaction contains information about a particular subscriber's activity. [1] A telecommunication network consists of various interacting devices and platforms, and any transaction carried out by a subscriber is often recorded in multiple devices as it passes through the network. Telecommunication organizations generally need to extract transaction information from these network elements in order to bill subscribers correctly for their usage of the network. Transaction processing systems (TPS) are a subset of information systems and, in the telecommunications industry, form an integral part of the management information system. A TPS can be regarded as the link between the various network elements and platforms and the information management uses to drive the business.

Overview of Transaction Processing Systems within a GSM network.

CDR Creation

Each activity occurring on a specific network element within the telecommunication network is recorded by the particular platform. All available information about the transaction is recorded and encoded. The recorded transactions are called Call Data Records (CDRs). Various formats and protocols are used to encode these CDRs; example encoding formats include ASN.1, XML and CSV. Some platform vendors develop their own encoding protocols for security reasons. Encoded CDRs are grouped into batches and periodically moved to locations from where the TPS can collect the CDR batches for processing. [2]
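
As a minimal sketch of one of the simpler encodings mentioned above, a batch of CDRs can be serialized as CSV. The field names and values here are hypothetical; real platforms define far richer record layouts (IMSI, cell identifiers, charging information, and so on):

```python
import csv
import io

# Hypothetical CDR fields; real platforms define many more.
CDR_FIELDS = ["subscriber", "called_number", "start_time", "duration_s", "cell_id"]

def encode_cdr_batch(cdrs):
    """Encode a batch of CDR dicts as a CSV string (one common CDR encoding)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=CDR_FIELDS)
    writer.writeheader()
    writer.writerows(cdrs)
    return buf.getvalue()

batch = [
    {"subscriber": "27820000001", "called_number": "27821111111",
     "start_time": "2011-03-20T08:15:00", "duration_s": 125, "cell_id": "GSM-0042"},
]
encoded = encode_cdr_batch(batch)
```

Binary encodings such as ASN.1 BER work on the same principle, but require a schema (and usually vendor documentation) to decode.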

Gathering of CDR files

The TPS is configured to periodically check each platform for any new CDR batches becoming available. The TPS uses standard network protocols, including FTP, SFTP and FTPS, to transfer the CDR batch files to the TPS. Some platform vendors have developed their own file transfer protocols, in which case the TPS needs to be customized to retrieve the batch files from these platforms. The TPS is also responsible for ensuring the integrity of each file transferred, so that no IP network errors render the file corrupt. Checking for duplicate files from a particular platform is also a responsibility of the TPS, ensuring that no file is processed more than once, which would result in duplicated CDRs. Once batch files are retrieved from a particular network element, they are backed up to long-term media. Some governments require that a record of each and every transaction be stored indefinitely in its raw (encoded) format. Batch sizes and collection frequency differ for each network element and are also directly related to the number of active subscribers on a particular telecommunication network. [3] [4]
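
The integrity and duplicate-file checks described above can be sketched as follows. The `BatchCollector` class, the checksum comparison and the file name are illustrative assumptions, not the mechanism of any specific TPS product:

```python
import hashlib

def file_checksum(data: bytes) -> str:
    """Checksum of the transferred batch file, compared against the source's value."""
    return hashlib.md5(data).hexdigest()

class BatchCollector:
    """Tracks which batch files have already been collected from one platform."""
    def __init__(self):
        self.processed = set()  # file names already accepted

    def accept(self, filename, data, expected_checksum):
        if filename in self.processed:
            return "duplicate"   # already processed: skip to avoid duplicated CDRs
        if file_checksum(data) != expected_checksum:
            return "corrupt"     # transfer error: the file must be re-fetched
        self.processed.add(filename)
        return "accepted"        # at this point the file would be backed up

collector = BatchCollector()
payload = b"...encoded CDR batch contents..."
first = collector.accept("msc01_20110320_0001.cdr", payload, file_checksum(payload))
again = collector.accept("msc01_20110320_0001.cdr", payload, file_checksum(payload))
```

In practice the transfer itself would use FTP, SFTP or FTPS as described above, and accepted files would be written to long-term backup media before decoding begins.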

Decoding / Enrichment and Loading of CDRs

Once the TPS has successfully retrieved all the CDR batches, its first task is to decode the CDRs into a human-readable (ASCII) format. This is one of the most important functions of the TPS within the telecommunications industry, as any error in the decoding process will result in inaccurate, unreliable information being passed on to downstream processes and ultimately to the reports viewed by management. The TPS usually includes standard functionality to decode common CDR encoding formats such as XML, CSV and ASN.1. Should a particular platform vendor encode CDRs in non-standard protocols, customization of the TPS is required. The vendor is then required to supply the suppliers of the TPS with detailed CDR specifications, so that the TPS can be enhanced to recognize the CDR formats, together with detailed explanations of which information the CDR contains about a particular subscriber's activity on the network. Upon successful completion of the decoding, the decoded CDR batches are checked for duplicate records. Duplicate CDRs are discarded and reported on. The TPS administrators are responsible for verifying that the CDRs flagged as duplicates are in fact true duplicates. [4]
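
The decode-then-deduplicate step can be sketched for the CSV case as follows. The choice of (subscriber, start time) as the duplicate key is a hypothetical simplification; real systems use platform-defined unique record identifiers:

```python
import csv
import io

def decode_and_dedupe(encoded_batch, seen_keys):
    """Decode a CSV-encoded CDR batch and separate out records already seen.

    seen_keys persists across batches, so duplicates are caught even when
    the same record arrives in two different files."""
    records, duplicates = [], []
    for row in csv.DictReader(io.StringIO(encoded_batch)):
        key = (row["subscriber"], row["start_time"])  # hypothetical duplicate key
        if key in seen_keys:
            duplicates.append(row)   # discarded, but reported for verification
        else:
            seen_keys.add(key)
            records.append(row)
    return records, duplicates

batch = ("subscriber,start_time,duration_s\n"
         "27820000001,2011-03-20T08:15:00,125\n"
         "27820000001,2011-03-20T08:15:00,125\n")
seen = set()
unique, dupes = decode_and_dedupe(batch, seen)
```

The `duplicates` list corresponds to the records an administrator would later verify as true duplicates before they are finally discarded.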

Depending on the TPS software used, the processes following the duplicate-level checking can differ. Some high-end TPSs combine information from various elements to create master CDRs before loading them into the 'datastore', whilst a lower-end or entry-level TPS loads the data directly upon completion of duplicate checking. The 'datastore' used by the TPS consists of a relational database management system (RDBMS). High-end enterprise RDBMSs include the likes of Oracle, Microsoft SQL Server and MySQL. The choice of RDBMS is usually driven by company policy, price and recommendations from the TPS supplier. The RDBMS should at least be able to support the volumes of CDRs generated by the particular network.

Once all intermediate processing on the CDR batches has completed, the TPS can load the data into the particular 'datastore'. Separate entities within the RDBMS are used to store the data from the different network platforms. The architecture and layout of the RDBMS are usually dictated by the particular TPS. The different entities within the RDBMS then contain detailed records of each transaction that occurred on any particular platform within the network. Advanced application administrators can query the detailed data by means of SQL. Depending on the network size, subscriber base and particular network platform, the different RDBMS entities can become extremely large, and queries on these entities require a high-end hardware architecture. In order to speed up recurring queries and reports on the CDR detail within the RDBMS entities, the TPS often summarizes the data based on certain dimensions; these summaries are also known as aggregates. [2] [3]
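
A minimal sketch of loading decoded CDRs into a detail entity and querying it with SQL is shown below, using an in-memory SQLite database as a stand-in for the enterprise RDBMS; the table and column names are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the production RDBMS

# One detail entity per platform; here, a hypothetical voice-switch table.
conn.execute("""CREATE TABLE cdr_voice (
    subscriber TEXT, start_time TEXT, duration_s INTEGER)""")

rows = [
    ("27820000001", "2011-03-20T08:15:00", 125),
    ("27820000002", "2011-03-20T09:00:00", 60),
]
conn.executemany("INSERT INTO cdr_voice VALUES (?, ?, ?)", rows)

# A detail-level SQL query, as an advanced administrator might run it:
total = conn.execute(
    "SELECT SUM(duration_s) FROM cdr_voice WHERE start_time LIKE '2011-03-20%'"
).fetchone()[0]
```

On a real network this table would hold millions of rows per day, which is why the recurring queries are served from aggregates instead.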

Aggregation of loaded data

Summaries, also known as aggregates, are used to condense important data that needs to be accessed quickly and easily. Recurring reports, i.e. hourly, daily and monthly reports, are sourced from aggregates, as retrieving the information from the detail entities can take huge amounts of time and requires a lot of hardware resources. The summary processes regularly retrieve data from the detail entities within the RDBMS and summarize it based on certain required information (dimensions). Key measures are summed in order to give hourly, daily or weekly summaries, depending on the reporting requirements. Some governments require that at least 6 months of detailed data be readily available in the RDBMS, and that a minimum of 5 years of summaries be available. Due to the high cost of storage, telecommunication organizations regularly archive data that is no longer required. Data older than the set retention period is moved from high-cost storage to low-cost permanent media (e.g. tape). Should data older than the retention period be required, it can either be restored from the raw CDRs (backed up by the TPS upon collection) or from the datastore backups. [5]
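
Building an aggregate from a detail entity amounts to a GROUP BY over the chosen dimensions with the key measures summed. A sketch using SQLite (table and column names are assumed, not standard):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cdr_voice (subscriber TEXT, call_date TEXT, duration_s INTEGER)")
conn.executemany("INSERT INTO cdr_voice VALUES (?, ?, ?)", [
    ("27820000001", "2011-03-20", 125),
    ("27820000001", "2011-03-20", 60),
    ("27820000002", "2011-03-21", 30),
])

# Daily aggregate: one row per (call_date, subscriber) dimension pair,
# with the key measures (call count, total duration) summed.
conn.execute("""
    CREATE TABLE agg_voice_daily AS
    SELECT call_date, subscriber,
           COUNT(*) AS calls, SUM(duration_s) AS total_s
    FROM cdr_voice
    GROUP BY call_date, subscriber
""")

daily = conn.execute(
    "SELECT calls, total_s FROM agg_voice_daily "
    "WHERE call_date = '2011-03-20' AND subscriber = '27820000001'").fetchone()
```

Recurring reports then read the small `agg_voice_daily` table rather than scanning the full detail entity.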

Reporting

Management requires reports in order to assess the performance of the business using various KPIs. The data for these reports is processed and stored by the TPS. Summaries are built in order to achieve optimal query and reporting performance. The last function of the TPS is to present the data in a user-friendly, timely and accurate manner. Various tools exist for administrators to present the data to management. BI tools like Oracle BI, BusinessObjects and Cognos can be configured to source data from the TPS datastore summaries, giving users the ability to view the data via a web browser. This gives the user up-to-date information and the ability to cross-tabulate and build higher-level summaries. Microsoft Excel can also be used to present the data to the end user. Some TPS providers can integrate with Microsoft Excel, delivering information to the user in a spreadsheet from where Excel pivoting can be used to summarize and present the data. Most BI tools include functionality to schedule reports, which can be sent via e-mail or even SMS to configured users, ensuring timely availability of reports. [5] [6]
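
The cross-tabulation that a BI tool or an Excel pivot performs on aggregate rows can be sketched in a few lines; the dimension and measure names here are illustrative:

```python
from collections import defaultdict

def pivot(rows, row_dim, col_dim, measure):
    """Cross-tabulate aggregate rows, as a BI tool or Excel pivot would:
    one output row per row_dim value, one column per col_dim value."""
    table = defaultdict(lambda: defaultdict(int))
    for r in rows:
        table[r[row_dim]][r[col_dim]] += r[measure]
    return {k: dict(v) for k, v in table.items()}

# Hypothetical rows sourced from a TPS datastore summary:
agg = [
    {"date": "2011-03-20", "service": "voice", "total_s": 185},
    {"date": "2011-03-20", "service": "data",  "total_s": 40},
    {"date": "2011-03-21", "service": "voice", "total_s": 30},
]
report = pivot(agg, "date", "service", "total_s")
```

The resulting dictionary maps each date to per-service totals, i.e. a higher-level summary built on top of the stored aggregates.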

References

  1. Lamb, George. GSM Made Simple. Cordero Consulting, 1997.
  2. Technical and Development Circle. A Handbook on Billing and Customer Care System in GSM network. Jabalpur, 2008, p. 8.
  3. Wikipedia. Billing Mediation Platform. Retrieved 2011-03-20.
  4. Telecom Concepts - Telecom Mediation. Retrieved 2011-03-20.
  5. Netezza. Transforming Telecommunications Business Intelligence. Retrieved 2011-03-20.
  6. Cao Longbing; Luo Dan; Luo Chao; Zhang Chengqi. Systematic Engineering in Designing Architecture of Telecommunications Business Intelligence System. University of Sydney, 2003, p. 8.