| Original author(s) | Steven Asherman, Arun Kumar |
| --- | --- |
| Developer(s) | Content Galaxy Inc. |
| Stable release | 7.61 / November 11, 2020 |
| Written in | C++, C# |
| Operating system | Microsoft Windows |
| Platform | Microsoft Visual Studio, .NET |
| Type | Web application framework |
| License | GPLv3 |
| Website | contentgalaxy |
The Base One Foundation Component Library (BFC) is a rapid application development toolkit for building secure, fault-tolerant database applications on Windows and ASP.NET. In conjunction with Microsoft's Visual Studio integrated development environment, BFC provides a general-purpose web application framework for working with databases from Microsoft, Oracle, IBM, Sybase, and MySQL, running under Windows, Linux/Unix, IBM iSeries, or z/OS. BFC includes facilities for distributed computing, batch processing, queuing, and database command scripting, which run under Windows or under Linux with Wine.
BFC is based on a database-centric architecture whose cross-DBMS data dictionary plays a central role in supporting data security, validation, optimization, and maintainability features.[1] Some of BFC's core technologies are based on underlying U.S. patents in database communication and high-precision arithmetic.[2][3][4]
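BFC's actual data dictionary interfaces are not documented here, but the general idea of dictionary-driven validation can be sketched in C#. Every type and member name below (FieldDef, DataDictionary, IsValid) is a hypothetical illustration of the technique, not BFC's API:

```csharp
// Minimal sketch of data-dictionary-driven validation (hypothetical names,
// not BFC's API). A central dictionary describes each field once, so every
// client and server validates against the same metadata regardless of the
// underlying DBMS.
using System;
using System.Collections.Generic;

record FieldDef(string Name, Type DataType, bool Nullable, int MaxLength);

class DataDictionary
{
    private readonly Dictionary<string, FieldDef> _fields = new();

    public void Define(FieldDef field) => _fields[field.Name] = field;

    // Validate a value against the shared metadata before it reaches any DBMS.
    public bool IsValid(string fieldName, object? value)
    {
        if (!_fields.TryGetValue(fieldName, out var def)) return false;
        if (value is null) return def.Nullable;
        if (!def.DataType.IsInstanceOfType(value)) return false;
        if (value is string s && s.Length > def.MaxLength) return false;
        return true;
    }
}

class Demo
{
    static void Main()
    {
        var dict = new DataDictionary();
        dict.Define(new FieldDef("CustomerName", typeof(string), false, 40));
        Console.WriteLine(dict.IsValid("CustomerName", "Acme Corp")); // True
        Console.WriteLine(dict.IsValid("CustomerName", null));        // False
    }
}
```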
BFC supports a unique model of large-scale distributed computing.[5][6] This design is intended to reduce both the vulnerability and the performance cost of depending on a centralized process to distribute tasks, and of communicating directly between nodes through messages. Deutsche Bank used the initial version of BFC to build its securities custody system, one of the earliest successful examples of commercial grid computing.[7][8]
BFC implements a grid computing architecture that revolves around the model of a "virtual supercomputer" composed of loosely coupled "batch job servers". These perform tasks that are specified and coordinated through database-resident control structures and queues. The model is virtual, as it uses the available processing power and resources of ordinary servers and database systems, which can also continue to work in their previous roles. The result is termed a virtual supercomputer because it presents itself as a single, unified computational resource that can be scaled in both capacity and processing power.[citation needed]
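As a rough sketch of this queue-driven coordination (all names hypothetical, with an in-memory queue standing in for BFC's database-resident control tables so the example runs stand-alone), several "batch job servers" can claim work from a shared queue with no central dispatcher and no direct node-to-node messages:

```csharp
// Illustrative sketch of the "virtual supercomputer" model (hypothetical
// names; in the real architecture the queue lives in shared database tables).
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class JobQueue
{
    // Stands in for a database-resident queue table; each claim models an
    // atomic update of one pending row (e.g. UPDATE ... WHERE status='PENDING').
    private readonly ConcurrentQueue<string> _pending = new();
    public void Submit(string job) => _pending.Enqueue(job);
    public bool TryClaim(out string? job) => _pending.TryDequeue(out job);
}

class Program
{
    static void Main()
    {
        var queue = new JobQueue();
        for (int i = 1; i <= 8; i++) queue.Submit($"task-{i}");

        // Three loosely coupled "batch job servers" poll the same queue.
        Parallel.For(0, 3, server =>
        {
            while (queue.TryClaim(out var job))
                Console.WriteLine($"server {server} ran {job}");
        });
    }
}
```

Because each claim would be an atomic database operation in the real architecture, individual servers could fail or join at any time without disturbing the others.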
BFC was originally developed by Base One International Corp., funded by projects done for Marsh & McLennan and Deutsche Bank that started in the mid-1990s.[9] Beginning in 1994, Johnson & Higgins (later acquired by Marsh & McLennan) built Stars, an insurance risk management system, using components known as ADF (Application Development Framework). ADF was the predecessor of BFC and was jointly developed by Johnson & Higgins and Base One programmers, with Base One retaining ownership of ADF and Johnson & Higgins retaining all rights to the Stars risk management software.[10][11] In 2014, BFC was acquired by Content Galaxy Inc., whose video publishing service was built with BFC.[12]
The name "BFC" was a play on MFC Microsoft Foundation Classes, which BFC extended through Visual C++ class libraries to facilitate the development of large-scale, client/server database applications. Developers can incorporate BFC components into web and Windows applications written in any of the major Microsoft programming languages (C#, ASP.NET, Visual C++, VB.NET). They can also use a variety of older technologies, including COM/ActiveX, MFC, and Crystal Reports. BFC works with both managed and unmanaged code, and it can be used to construct either thin client or rich client applications, with or without browser-based interfaces. [ citation needed ]
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
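As a minimal self-contained illustration in C# (loopback TCP on an arbitrary port, 9000), the server awaits incoming requests while the client initiates the session:

```csharp
// Minimal client-server exchange over TCP: the server shares a service
// (echoing), the client requests it.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 9000);
        listener.Start(); // server side: await incoming requests

        var serverTask = Task.Run(async () =>
        {
            using var conn = await listener.AcceptTcpClientAsync();
            var buf = new byte[256];
            int n = await conn.GetStream().ReadAsync(buf);
            var reply = Encoding.UTF8.GetBytes($"echo: {Encoding.UTF8.GetString(buf, 0, n)}");
            await conn.GetStream().WriteAsync(reply);
        });

        using var client = new TcpClient();          // client side: initiate the session
        await client.ConnectAsync(IPAddress.Loopback, 9000);
        await client.GetStream().WriteAsync(Encoding.UTF8.GetBytes("hello"));
        var resp = new byte[256];
        int m = await client.GetStream().ReadAsync(resp);
        Console.WriteLine(Encoding.UTF8.GetString(resp, 0, m)); // "echo: hello"

        await serverTask;
        listener.Stop();
    }
}
```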
In computing, a file server is a computer attached to a network that provides a location for shared disk access, i.e. storage of computer files that can be accessed by workstations within a computer network. The term server highlights the role of the machine in the traditional client–server scheme, where the clients are the workstations using the storage. A file server does not normally perform computational tasks or run programs on behalf of its client workstations.
A server is a computer that provides information to other computers called "clients" on a computer network. This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
In computing, a windowing system is a software suite that separately manages different parts of display screens. It is a type of graphical user interface (GUI) which implements the WIMP paradigm for a user interface.
In computer science, inter-process communication (IPC), also spelled interprocess communication, refers to the mechanisms an operating system provides to allow processes to manage shared data. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing.
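As a small sketch of one common OS-provided IPC mechanism, the following C# program uses a named pipe (the pipe name "demo-pipe" is arbitrary), with the client requesting data and the server responding:

```csharp
// IPC over a named pipe: one process acts as server, another as client
// (combined in a single program here for a runnable example).
using System;
using System.IO.Pipes;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var server = Task.Run(async () =>
        {
            using var pipe = new NamedPipeServerStream("demo-pipe");
            await pipe.WaitForConnectionAsync();        // server awaits a client
            var buf = new byte[64];
            int n = await pipe.ReadAsync(buf);          // receive the request
            Console.WriteLine($"server got: {Encoding.UTF8.GetString(buf, 0, n)}");
            await pipe.WriteAsync(Encoding.UTF8.GetBytes("pong")); // respond
        });

        using var client = new NamedPipeClientStream(".", "demo-pipe");
        await client.ConnectAsync();                     // client initiates
        await client.WriteAsync(Encoding.UTF8.GetBytes("ping"));
        var resp = new byte[64];
        int m = await client.ReadAsync(resp);
        Console.WriteLine(Encoding.UTF8.GetString(resp, 0, m)); // "pong"

        await server;
    }
}
```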
Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, charging for specific usage rather than a flat rate. Like other types of on-demand computing, the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility computing is the packaging of system resources, such as computation, storage, and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented.
Microsoft Transaction Server (MTS) was software that provided services to Component Object Model (COM) software components, to make it easier to create large distributed applications. The major services provided by MTS were automated transaction management, instance management and role-based security. MTS is considered to be the first major software to implement aspect-oriented programming.
In computer science, the event loop is a programming construct or design pattern that waits for and dispatches events or messages in a program. The event loop works by making a request to some internal or external "event provider", then calls the relevant event handler.
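A minimal event loop can be sketched in a few lines of C#; here a blocking queue stands in for the "event provider", and each posted handler is dispatched in turn:

```csharp
// Minimal event loop: wait for the next event, then call its handler.
using System;
using System.Collections.Concurrent;
using System.Threading;

class EventLoop
{
    private readonly BlockingCollection<Action> _events = new();

    public void Post(Action handler) => _events.Add(handler);
    public void Shutdown() => _events.CompleteAdding();

    // The loop itself: block until an event arrives, then dispatch it.
    public void Run()
    {
        foreach (var handler in _events.GetConsumingEnumerable())
            handler();
    }
}

class Program
{
    static void Main()
    {
        var loop = new EventLoop();
        new Thread(() =>                  // an "event provider" posting events
        {
            loop.Post(() => Console.WriteLine("event 1"));
            loop.Post(() => Console.WriteLine("event 2"));
            loop.Shutdown();
        }).Start();
        loop.Run();                       // dispatches until Shutdown()
    }
}
```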
Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it.
Microsoft Application Virtualization is an application virtualization and application streaming solution from Microsoft. It was originally developed by Softricity, a company based in Boston, Massachusetts, acquired by Microsoft on July 17, 2006. App-V represents Microsoft's entry into the application virtualization market, alongside its other virtualization technologies such as Hyper-V, Microsoft User Environment Virtualization (UE-V), Remote Desktop Services, and System Center Virtual Machine Manager.
Configurable Network Computing or CNC is JD Edwards's (JDE) proprietary client–server architecture and methodology. JD Edwards is now a division of the Oracle Corporation, which continues to sponsor the ongoing development of the JD Edwards Enterprise Resource Planning (ERP) system. While highly flexible, the CNC architecture is proprietary and, as such, cannot be exported to any other systems. Although the CNC architecture's chief claim to fame, insulation of applications from the underlying database and operating systems, was largely superseded by modern web-based technology, CNC technology continues to be at the heart of both the JD Edwards OneWorld and EnterpriseOne architectures and is planned to play a significant role in Oracle's developing Fusion architecture initiative. While a proprietary architecture, CNC is neither an Oracle nor a JDE product offering. The term CNC also refers to the systems analysts who install, maintain, manage, and enhance this architecture. CNCs are also one of the three technical areas in the JD Edwards ERP system, alongside developer/report writer and functional/business analysts.
Base One International Corp. was an American company that specialized in developing software for constructing database applications and distributed computing systems. Headquartered in New York City, the company was founded in 1993 and expanded in 1997 through the founding of its subsidiary, Base One Software Pvt. Ltd., in Bangalore, India. Base One held a number of U.S. patents related to its technologies for distributed computing and high-precision arithmetic.
Xgrid is a proprietary grid computing program and protocol developed by the Advanced Computation Group subdivision of Apple Inc.
In computing, virtualization (v12n) is a series of technologies that allows the division of physical computing resources into a series of virtual machines, operating systems, processes, or containers.
A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.
Microsoft Azure, or just Azure, is the cloud computing platform developed by Microsoft. It offers the management, access, and development of applications and services to individuals, companies, and governments through its global infrastructure. It provides software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Microsoft Azure supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.
Windows HPC Server 2008, released by Microsoft on 22 September 2008, is the successor product to Windows Compute Cluster Server 2003. Like WCCS, Windows HPC Server 2008 is designed for high-end applications that require high performance computing clusters. This version of the server software is claimed to efficiently scale to thousands of cores. It includes features unique to HPC workloads: a new high-speed NetworkDirect RDMA, highly efficient and scalable cluster management tools, a service-oriented architecture (SOA) job scheduler, an MPI library based on open-source MPICH2, and cluster interoperability through standards such as the High Performance Computing Basic Profile (HPCBP) specification produced by the Open Grid Forum (OGF).
Techila Distributed Computing Engine is a commercial grid computing software product. It speeds up simulation, analysis, and other computational applications by enabling scalability across the IT resources in the user's on-premises data center and in the user's own cloud account. Techila Distributed Computing Engine is developed and licensed by Techila Technologies Ltd, a privately held company headquartered in Tampere, Finland. The product is also available as an on-demand solution in Google Cloud Launcher, the online marketplace created and operated by Google. According to IDC, the solution enables organizations to create HPC infrastructure without the major capital investments and operating expenses required by new HPC hardware.