V Operating system

The V operating system (sometimes called the V-System) is a microkernel-based distributed operating system that controls a cluster of high-performance workstations on a high-speed network. It consists of a kernel that runs on each workstation and services implemented as processes.
The kernel provides:

  • Network-transparent address spaces in which processes run
  • Lightweight processes which can share these address spaces
  • Inter-process communication (IPC) achieved by blocking request-response (RPC-style) message exchange

The V kernel provides the means of connection between applications and service modules, but does not itself implement most services. Services implemented in the kernel include:

  • Process management
  • Communication management
  • Device management

Entity identifiers

Processes, process groups and communication endpoints are identified by unique 64-bit numbers called entity identifiers. These are host-independent, so a mapping between entity IDs and host addresses is needed. This mapping is maintained by the kernel using a mapping table together with multicast query messages to other kernels.
Such queries are necessary in a number of situations, for example to update the information after process migration or to create new table entries.
The kernels cooperate in the allocation of entity IDs to ensure that they are unique on a network-wide basis.
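The table-plus-multicast scheme above can be sketched as follows. This is an illustrative model, not V source: the class and method names (Kernel, register, resolve) are invented for the example, and a loop over peer kernels stands in for the multicast query.

```python
# Illustrative sketch: each kernel keeps a table of entity-ID -> host-address
# mappings and falls back to querying the other kernels (standing in for a
# multicast query) when an entry is missing.
class Kernel:
    def __init__(self, host, peers=None):
        self.host = host                      # this kernel's host address
        self.peers = peers if peers is not None else []
        self.mapping = {}                     # entity ID -> host address

    def register(self, entity_id):
        # Entities created on this host map to this host's address.
        self.mapping[entity_id] = self.host

    def resolve(self, entity_id):
        # Fast path: consult the local mapping table.
        if entity_id in self.mapping:
            return self.mapping[entity_id]
        # Slow path: query the other kernels and create a new table entry.
        for peer in self.peers:
            if peer.mapping.get(entity_id) == peer.host:
                self.mapping[entity_id] = peer.host
                return peer.host
        return None
```

The first lookup of a remote entity goes to the other kernels; the answer is cached, so later lookups are satisfied from the local table.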


Inter-process communication

V inter-process communication is request-response based.

  • Clients use the kernel send primitive to request a service from a server process
  • Servers may operate in either message mode or RPC mode (which mode is used is not apparent to the client, which blocks upon sending a request)
    • Message mode: the server uses the receive kernel primitive to receive the next request message, invokes a procedure to process the request, and sends a response back to the client process. If the server is busy the request is queued.
    • RPC mode: the server executes as a procedure invoked by the client process. This allows concurrent handling of server requests.

Send kernel calls are trapped by the local IPC module if the server is local, and handled by the network IPC module otherwise. Error handling, flow control, etc. are catered for by using the response message both as an acknowledgement and as permission to transmit the next request.
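The blocking request-response interaction can be sketched as below. This is a toy model: the names (Server, send, serve_one) are chosen for the example and are not V's actual kernel primitives.

```python
# Illustrative sketch (not V's actual API): Send blocks the client until the
# server's reply arrives; the reply doubles as an acknowledgement and as
# permission to transmit the next request.
import queue
import threading

class Server:
    def __init__(self, handler):
        self.requests = queue.Queue()        # requests queue while the server is busy
        self.handler = handler

    def serve_one(self):
        msg, reply_box = self.requests.get() # "receive" primitive: next request
        reply_box.put(self.handler(msg))     # response message, also the ACK

def send(server, msg):
    reply_box = queue.Queue(maxsize=1)
    server.requests.put((msg, reply_box))    # "send" primitive
    return reply_box.get()                   # client blocks until the response

server = Server(lambda m: m.upper())
threading.Thread(target=server.serve_one, daemon=True).start()
print(send(server, "ping"))                  # prints PING
```

Because the client blocks on the reply, at most one request per client is outstanding, which is what lets the response carry the flow-control role described above.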


Messages

Messages have a fixed length of 32 bytes, with an optional attached data segment of up to 32 kbytes. The kernel interface, buffering and network packet transmission are optimised to handle small messages (as over 50% of traffic is of this sort). These short messages are incorporated into the message header.
Each process descriptor contains a template VMTP header with some fields initialised on process creation.
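The fixed-size message format might be modelled as follows. The field layout here (sender ID, receiver ID, flags, inline data) is invented for illustration and is not the actual VMTP header.

```python
# Illustrative sketch: a fixed 32-byte message with an optional attached data
# segment of up to 32 kbytes. Short messages fit entirely in the header.
import struct

HEADER = struct.Struct(">QQH14s")   # 8 + 8 + 2 + 14 = 32 bytes

def pack_message(sender_id, receiver_id, flags, inline=b"", segment=b""):
    assert len(inline) <= 14 and len(segment) <= 32 * 1024
    return HEADER.pack(sender_id, receiver_id, flags, inline) + segment

assert len(pack_message(1, 2, 0, b"hello")) == 32   # no segment: just the header
```

A message with no data segment is exactly one 32-byte header, which is why the common short-message case can be handled without extra buffering.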

V IPC time measurements indicate that it is faster to import an 8K block of data from the main memory of another node than to load the same block from local disk…

Multicast and multiprocessing

Multicast is used for dissemination of load information (as required for distributed scheduling) and for synchronisation of the V time servers. Processes may be collected into groups (e.g. a group of file servers, or a group of processes executing a parallel algorithm). Group IDs are taken from the same name space as process IDs.
A group send can contain a qualifying process ID or a group ID. The qualifying ID is used to select a particular server from a group, as required, for example, by process scheduling, which is distributed with one scheduler per node.
A suspend operation on a process is achieved by sending a message to the process scheduler group with the respective process ID as a qualifier. The kernel routes the message to the host node for the process using the entity ID to host address mapping table (knowledge of the ID of the group of servers is sufficient to allow an action on a particular process).
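The suspend example can be sketched as a qualified group send. This is a toy model with invented names, assuming one scheduler per node as described above.

```python
# Illustrative sketch: a message to the process-scheduler group carries a
# qualifying process ID, and only the scheduler on that process's host node
# acts on it. All names here are invented for the example.
class Scheduler:
    def __init__(self, host):
        self.host = host
        self.suspended = set()

    def handle(self, op, pid):
        if op == "suspend":
            self.suspended.add(pid)

def group_send(scheduler_group, id_to_host, op, qualifier_pid):
    # Route with the entity-ID -> host-address mapping table: only the
    # scheduler on the qualifying process's host handles the message.
    host = id_to_host[qualifier_pid]
    for scheduler in scheduler_group:
        if scheduler.host == host:
            scheduler.handle(op, qualifier_pid)
```

The caller needs only the scheduler group ID and the target process ID; the mapping table does the routing to the right node.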

Memory and files

Physical memory in V is organised as a cache of pages from open files. Each process address space is organised as a set of address ranges called regions. Each region is bound to a portion of an open file, providing the process with a window onto that portion of the file.
The kernel manages binding, block caching and consistency. Consistency is achieved using locking at the file server and a block-ownership protocol.
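A region-based address space of this kind might look like the following sketch, where an open file is modelled as a bytes object and physical memory as a cache of file pages. Names and the page size are invented for the example.

```python
# Illustrative sketch: each region is a window onto a portion of an "open
# file" (modelled here as a bytes object); physical memory is modelled as a
# cache of file pages, fetched on demand.
PAGE = 1024

class AddressSpace:
    def __init__(self):
        self.regions = []        # (start address, length, file, file offset)
        self.page_cache = {}     # (file id, page number) -> page contents

    def bind(self, start, length, file, file_offset):
        # Bind an address range (a region) to a portion of an open file.
        self.regions.append((start, length, file, file_offset))

    def read(self, addr):
        for start, length, file, offset in self.regions:
            if start <= addr < start + length:
                file_pos = offset + (addr - start)
                page = file_pos // PAGE
                key = (id(file), page)
                if key not in self.page_cache:   # demand-fetch the page
                    self.page_cache[key] = file[page * PAGE:(page + 1) * PAGE]
                return self.page_cache[key][file_pos % PAGE]
        raise MemoryError("address not bound to any region")
```

A read within a bound region either hits the page cache or fetches the containing file page first, mirroring the "physical memory as a cache of pages from open files" view.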


Naming

The V term object refers to a process, an address space, a communications port or an open file. All servers are effectively object managers, implementing names for the set of objects they manage.
An object specified by name can be handled by the server without reference to the name server, provided the client can identify which server manages the object.
When an object manager creates a directory of object names it allocates a globally unique directory name which is used as a prefix to the names of all objects in the directory. All such object managers must join the name handling group of processes.
The manager of an object is found by multicasting the character-string object name to the name handling group. When a program is initiated, a table of name-prefix-to-object-manager mappings is initialised, and this table is maintained while the program runs. Table entries are updated on a "need to know" basis; incorrect entries are detected when they are used. Once a character-string name has been mapped to an object, an object ID is used in subsequent references to the object. Object IDs are formed by concatenating the object manager-id with the local-object-id: the manager-id is an IPC ID specifying the manager which implements the object, and the local-object-id specifies the object relative to that manager.
When a manager crashes, its manager-id is invalidated. A new id is allocated to an object manager when it is restarted or on reboot of the system.
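The prefix-cache scheme can be sketched as follows. This is an illustrative model with invented class names, in which iterating over the name handling group stands in for the multicast query.

```python
# Illustrative sketch: a per-program cache of name-prefix -> object-manager
# mappings, filled on a need-to-know basis; a miss triggers a "multicast"
# (here, a scan) over the name handling group.
class ObjectManager:
    def __init__(self, prefix, objects):
        self.prefix = prefix                  # globally unique directory prefix
        self.objects = objects                # local name -> object ID

    def lookup(self, name):
        return self.objects.get(name)

class NameCache:
    def __init__(self, name_handling_group):
        self.group = name_handling_group      # managers that export names
        self.prefix_to_manager = {}           # filled on a need-to-know basis

    def resolve(self, full_name):
        prefix, _, local_name = full_name.partition("/")
        if prefix not in self.prefix_to_manager:   # miss: ask the whole group
            self.prefix_to_manager[prefix] = next(
                m for m in self.group if m.prefix == prefix)
        return self.prefix_to_manager[prefix].lookup(local_name)
```

Only the first resolution of a given prefix involves the whole group; subsequent names under the same prefix go straight to the cached manager.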

Object IDs are only allocated to objects, such as open files or address spaces, whose lifetime is shorter than that of their managers. Character-string names are used for long-lived objects such as files. If an object manager is replicated or distributed, a client uses the manager group ID on the first access to the object; subsequent accesses use the specific server ID. If the object migrates or the server entity crashes, the client receives an error on the next access and must then rebind to the new server entity using the server group ID.
Replicated writes are achieved by using the group address to update every copy (the response messages are checked off against the list of individual servers to ensure completeness).
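Checking responses off against the server list might look like the following sketch. Replica and its fields are invented for the example, and a loop over the group stands in for the group-addressed send.

```python
# Illustrative sketch: the write is "multicast" to every replica in the group
# and the responses are checked off against the list of servers; an empty
# result means every copy was updated.
class Replica:
    def __init__(self, name, up=True):
        self.name, self.up, self.store = name, up, {}

    def write(self, key, value):
        if self.up:
            self.store[key] = value
            return self.name          # the response message from this replica
        return None                   # a crashed replica never responds

def replicated_write(group, key, value):
    responses = {r.write(key, value) for r in group} - {None}
    missing = {r.name for r in group} - responses
    return missing                    # empty set: every copy acknowledged
```

A non-empty result identifies exactly which replicas failed to acknowledge, which is the information needed to detect an incomplete update.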


Summary

  • Provides the abstraction of a single machine
  • Resource naming is location transparent; the name prefix defines the server that implements the resource rather than physical location
  • Naming transparency is achieved as resource names are unique at the server level (once the server ID prefix is added the name is network unique)
  • Consistency is guaranteed using a rudimentary scheme
  • Naming lacks uniformity because long term names and short term object IDs are different
  • Security and fault tolerance are not strong points
Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License