Newton Data Storage - "Soups"
One of the distinctive features of the Apple Newton was that it implemented all storage of data as sets of persistent hierarchical databases called "Soups."
The way to think of it is that each application would put its data into its own "pot of soup," and all sorts of applications could dip in their own "ladles" to pull out bits of data. As a result, a calendar application might look into the Address Book's "soup" to reference contact information, any application could store or access notes in the Note Book's "soup," and the like.
<firstname.lastname@example.org> observes that this may be characterized as an environment, somewhat like a Lisp scoped environment, except that unlike typical dynamic, static, or lexical scopes, this is a sort of shared "dynamic repository" that any procedure may access or make deposits to.
Unlike Lisp environments, which generally reside in memory, Newton "soups" are primarily on secondary storage. The point of adding storage to a Newton is, for the most part, to allow having more and bigger bowls of soup.
The really interesting thing about Newton "soups" was that they were designed to cope well with storage being removed. "Union soups" provide a way of merging together multiple soup stores on different devices. For instance, you might divide your address lists into a set of critical contacts that sit in internal memory, and then have others that can reside on removable memory cards. There would, as a result, need to be a "union soup" to merge this into a single address book.
If you took out the memory card containing (let's say) your work-related contacts, the union soup would be modified to indicate that those contacts were gone, and any applications referencing those addresses would be notified so that they could remove the icons that allow the user to access them. Once the card comes back, the union soup gets augmented, and displays are updated in turn.
Something similar could be implemented by having each "storage device" store a hash table for an "associative array" (in the style of DBM). Handling the event processing needed to update applications using the data would definitely not be a trivial matter, though...
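As a rough sketch of that idea (hypothetical code, not the actual Newton soup API), a "union soup" can be modeled as a view over several dict-like stores, one per storage device, with applications subscribing to mount/unmount notifications:

```python
# Hypothetical sketch of a "union soup": several per-device stores merged
# into one lookup view, with callbacks fired when a device comes or goes.
class UnionSoup:
    def __init__(self):
        self.stores = {}       # device name -> dict of entries on that device
        self.listeners = []    # application callbacks for mount/unmount events

    def subscribe(self, callback):
        """Register a callback invoked as callback(event, device)."""
        self.listeners.append(callback)

    def mount(self, device, store):
        """A card is inserted: its entries become visible in the union."""
        self.stores[device] = store
        for notify in self.listeners:
            notify("mounted", device)

    def unmount(self, device):
        """A card is removed: its entries drop out of the union."""
        self.stores.pop(device, None)
        for notify in self.listeners:
            notify("unmounted", device)

    def lookup(self, key):
        """Search every mounted store for the key."""
        for store in self.stores.values():
            if key in store:
                return store[key]
        return None

soup = UnionSoup()
soup.subscribe(lambda event, device: print(event, device))
soup.mount("internal", {"alice": "555-0001"})
soup.mount("card1", {"bob": "555-0002"})
print(soup.lookup("bob"))   # found on the card
soup.unmount("card1")
print(soup.lookup("bob"))   # gone once the card is pulled
```

The hard part the sketch glosses over is exactly the one noted above: getting every interested application to react correctly to those mount/unmount events.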
XDBM - database system for storing XML.
When transferring XML-based data from one place to another, it is most certainly convenient to transfer it in text form, as that is eminently readable.
On the other hand, while you are processing it, it is likely that any given application will not find "a stream of bytes" to be the most convenient representation.
XDBM provides a database form in which you might store an XML document. This offers the following benefits:
No need to parse the document.
Elements are loaded on demand.
If a document is extremely large, and only small portions are used, as would be the case for a search-oriented application (e.g., where documentation for a program might come as one big XML document), only those elements that get used are actually loaded into memory.
Faster searching, via doing tree-oriented searches for specific elements.
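XDBM's own API differs, but the "elements loaded on demand" idea can be illustrated with the Python standard library: stream a large XML document and keep in memory only the elements a search actually touches, discarding the rest as parsing proceeds.

```python
# Illustration of on-demand element loading (not XDBM's actual interface):
# iterparse streams the document, and elem.clear() frees elements we skip,
# so only the matching element's data is retained.
import io
import xml.etree.ElementTree as ET

def find_page(source, wanted_id):
    """Scan <page> elements as they are parsed, keeping only the match."""
    for _, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "page" and elem.get("id") == wanted_id:
            return elem.text
        elem.clear()  # discard instead of building the whole tree
    return None

doc = io.StringIO(
    "<manual>"
    "<page id='intro'>Getting started</page>"
    "<page id='install'>Installation steps</page>"
    "<page id='faq'>Frequently asked questions</page>"
    "</manual>"
)
print(find_page(doc, "install"))  # prints "Installation steps"
```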
Constraint databases are a fairly new and active area of database research. The key idea is that constraints, such as linear or polynomial equations, are used to represent large, or even infinite, sets in a compact way.
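A toy example of the idea (hypothetical code, not any real constraint-database engine): an infinite relation such as "all points with x + y ≤ 10" is stored as a constraint rather than as an explicit set of tuples, and a join of two such relations is just the conjunction of their constraints.

```python
# A relation over (x, y) is represented as a predicate -- a compact,
# finite description of a possibly infinite set of points.
def halfplane(a, b, c):
    """All points (x, y) with a*x + b*y <= c: an infinite set, stored compactly."""
    return lambda x, y: a * x + b * y <= c

def intersect(p, q):
    """The 'join' of two constraint relations is conjunction of constraints."""
    return lambda x, y: p(x, y) and q(x, y)

region = halfplane(1, 1, 10)                     # x + y <= 10
quadrant = lambda x, y: x >= 0 and y >= 0        # first quadrant
triangle = intersect(region, quadrant)           # a bounded triangle

print(triangle(3, 4))    # True: inside the triangle
print(triangle(8, 8))    # False: violates x + y <= 10
```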
This is a PC-based application to do what might be loosely termed text-based data mining.
An engine for a nonrelational DBMS written in OCAML
By linking together a bunch of such association lists, you can figure out who knows whom, and try to find "minimal degrees of separation" of people. Probably handy for people trying to stalk movie stars.
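A sketch of how that works (the data here is invented for illustration): the database is an association list mapping each person to the people they know, and a breadth-first search finds the minimal acquaintance chain between two of them.

```python
# "Degrees of separation" over an association-list database, via BFS.
from collections import deque

knows = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice"],
    "dave": ["bob", "erin"],
    "erin": ["dave"],
}

def degrees(start, goal):
    """Return the length of the shortest acquaintance chain, or None."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for friend in knows.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None

print(degrees("alice", "erin"))  # 3: alice -> bob -> dave -> erin
```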
A document-oriented distributed database implemented in Section 10.1.
Pick is a database system created in the late '60s/early '70s by Dick Pick (I don't name 'em) that provided an integrated database application environment. That is, it included:
A database repository
A database metadata repository (e.g. "Data Dictionary")
"Standardized" languages (their versions of BASIC, C, and possibly COBOL)
"Standardized" capabilities for report generation, process management, and data access.
By doing these things, it provided the sorts of facilities that COBOL systems offer for business programming, facilities that SQL environments still have not standardized.
Note that in this case "standard" is not referring to the availability of any sort of de jure standard, but rather to the notion that all implementations of "Pick-like" systems provide largely compatible subsystems in each of the described areas.
Proponents of Pick make some fairly radical claims about power and performance; there is little doubt that Pick systems can do a decent job for small and midsized firms; whether they are up to dealing with huge databases, where they would compete with the likes of Oracle Parallel Server, is somewhat questionable.
Note that MaVerick runs atop an SQL DBMS such as PostgreSQL.
Provides a quasi-theoretical discussion of "non-first-normal-form databases". IBM sells Universe and UniData, which are Pick-like databases.
This discusses the design of Qddb , which is, more or less, a nested relational database system.
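To make "nested relational" concrete (a sketch of the general idea, not Qddb's actual storage format): a single record holds a relation nested inside it, such as a set of phone numbers, instead of that data being split across rows of a separate flat table as first normal form would require.

```python
# A Pick/Qddb-style nested record: the "phones" attribute is itself a
# small relation living inside the record.
record = {
    "name": "Smith, Jane",
    "phones": [
        {"type": "work", "number": "555-0100"},
        {"type": "home", "number": "555-0101"},
    ],
}

def flatten(rec):
    """Project the nested record into first-normal-form rows, SQL-style."""
    return [(rec["name"], p["type"], p["number"]) for p in rec["phones"]]

for row in flatten(record):
    print(row)   # one flat row per nested phone entry
```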
MUMPS is short for the Massachusetts General Hospital Utility Multi-Programming System. It is a programming language with extensive tools for the support of database management systems. MUMPS was originally used for medical records and is now widely used where multiple users access the same databases simultaneously, e.g. banks, stock exchanges, travel agencies, hospitals.
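MUMPS stores its data in "globals": sparse, persistent, multidimensional arrays subscripted by arbitrary strings. This hypothetical sketch mimics the flavor of that with a dict keyed by subscript tuples, including a traversal in the spirit of MUMPS's $ORDER function:

```python
# Emulating a MUMPS global like ^patients("123","name")="Smith, Jane"
# with a dict keyed by subscript tuples. Names here are invented.
patients = {}

def gset(*args):
    """Rough analogue of: SET ^patients(sub1, sub2, ...) = value"""
    *subscripts, value = args
    patients[tuple(subscripts)] = value

gset("123", "name", "Smith, Jane")
gset("123", "phone", "555-0100")
gset("456", "name", "Doe, John")

def order(prefix):
    """$ORDER-style traversal: the next-level subscripts under a prefix, sorted."""
    n = len(prefix)
    return sorted({k[n] for k in patients if k[:n] == prefix and len(k) > n})

print(order(()))                   # top-level subscripts: patient ids
print(order(("123",)))             # attributes stored under patient "123"
print(patients[("123", "name")])   # direct subscripted access
```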
On one consulting engagement, I came rather close to needing to learn a touch of MUMPS. I used C and the VMS "DCL" scripting language instead...
Now with additional functionality to integrate with relational databases, notably including PostgreSQL.
Licensed using the GPL
A MUMPS implementation that can use PostgreSQL as the storage backend.
This appears to be what you get when you take MUMPS, add OO, multidimensional arrays, and SQL, and join it all together into a main-memory database...