Distributed Shared Memory

Distributed Shared Memory (DSM) systems allow processes to access data in a shared memory space by reference. Processes in such an environment are able to share arbitrarily complex data structures without the need to flatten and rebuild the structures.
If the processors in such a system are loosely coupled, the underlying communications system is used to provide the abstraction of a shared memory space.
The techniques used are similar to those that provide large virtual memories from relatively small physical memories backed by disk storage; DSM extends this virtual memory concept across the physical memories of multiple, spatially separate computing nodes in a network.

Resolution of an address fault may involve importing a block of data from another node into local physical memory over the network (a minimal sketch of this step follows the list below). The fact that these blocks of memory can live in non-local address spaces creates problems not experienced in single-node computing:

  • Maintaining a consistent view of the DSM across all nodes
  • Addressing DSM memory blocks
  • Protecting data from illegal access
  • Coping with unexpected node/network failures
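
The fault-resolution step described above can be illustrated with the hedged Python sketch below; the block size, the directory structure and the request_block call are assumptions made for this example, not the mechanism of any particular DSM system.

    # Minimal sketch of fault-driven block fetching in a DSM runtime.
    # BLOCK_SIZE, the directory and request_block are illustrative assumptions.

    BLOCK_SIZE = 4096

    class DsmNode:
        def __init__(self, node_id, directory):
            self.node_id = node_id
            self.directory = directory      # block number -> node holding the block
            self.local_blocks = {}          # blocks currently resident in local memory

        def read(self, address):
            block_no, offset = divmod(address, BLOCK_SIZE)
            if block_no not in self.local_blocks:
                # "Address fault": the block is not in local physical memory,
                # so import it over the network from the node that holds it.
                holder = self.directory[block_no]
                self.local_blocks[block_no] = holder.request_block(block_no)
            return self.local_blocks[block_no][offset]

        def request_block(self, block_no):
            # Served at the holding node: hand a copy of the block to the requester.
            return self.local_blocks[block_no]

    # Usage: node B faults on an address whose block currently lives on node A.
    directory = {}
    a, b = DsmNode("A", directory), DsmNode("B", directory)
    a.local_blocks[0] = bytearray(BLOCK_SIZE)
    a.local_blocks[0][10] = 42
    directory[0] = a
    print(b.read(10))                       # B imports block 0 and prints 42

The sketch deliberately ignores the consistency, addressing, protection and failure issues listed above.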

The major advantage of DSM is that data can be shared between physically distributed machines without application programmers having to write network-specific code. The DSM implementation sits beneath the applications, which access remote data in the same way they access local data.

The four basic algorithms for DSM are:

  • Central server algorithm (sketched after this list)
  • Migration algorithm
  • Read-replication algorithm
  • Full replication algorithm
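
As an illustration, the following is a hedged Python sketch of the simplest of the four, the central server algorithm, in which one server holds the only copy of the shared data and serialises every read and write; the class and method names are invented for this example.

    # Central server algorithm: a single server holds the authoritative copy of
    # the shared data and serialises all reads and writes. Names are illustrative.

    class CentralServer:
        def __init__(self):
            self.memory = {}                     # the only copy of the shared data

        def handle(self, op, address, value=None):
            if op == "read":
                return self.memory.get(address, 0)
            if op == "write":
                self.memory[address] = value
                return "ack"

    class Client:
        def __init__(self, server):
            self.server = server                 # in practice, a network endpoint

        def read(self, address):
            return self.server.handle("read", address)

        def write(self, address, value):
            return self.server.handle("write", address, value)

    # Usage: both clients see a consistent view because the server serialises access.
    server = CentralServer()
    c1, c2 = Client(server), Client(server)
    c1.write(0x100, 7)
    print(c2.read(0x100))                        # prints 7

The central server is easy to reason about but becomes a bottleneck and a single point of failure; the migration and replication algorithms instead move or copy the data among the nodes.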

IVY

IVY (Integrated shared Virtual memory at Yale) is a DSM system that provides each process with an address space that is partially private and partially shared.
Disadvantages include:

  • Every node maintains a complete shared-memory page table (not scalable to large shared memories; one such entry is sketched after the lists below)
  • No provision for protecting data from illegal access
  • No provision for fault tolerance

Advantages include:

  • Naming/location transparency is provided (data is accessed by a unique virtual address)
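
To make the page-table disadvantage concrete, here is a hedged Python sketch of one entry in an IVY-style shared-memory page table; the field names are illustrative, based on the commonly described design in which each entry records the page's owner, the local access right and (at the owner) the set of nodes holding read copies.

    # Rough sketch of one entry in an IVY-style shared-memory page table.
    # Field names are illustrative; the point is that every node keeps one
    # such entry for every shared page, which is why the scheme does not
    # scale to large shared memories.

    from dataclasses import dataclass, field

    @dataclass
    class PageEntry:
        owner: str                                  # node currently owning the page
        access: str = "nil"                         # local right: "nil", "read" or "write"
        copy_set: set = field(default_factory=set)  # nodes with read copies (kept by the owner)
        locked: bool = False                        # set while a fault on this page is serviced

    # Every node replicates this table: a 1 GiB shared memory with 4 KiB pages
    # needs 262,144 entries per node, however few pages the node actually touches.
    page_table = {page_no: PageEntry(owner="node0") for page_no in range(2**30 // 2**12)}
    print(len(page_table))                          # 262144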

MemNet

MemNet was devised to overcome the I/O view of networks, in which the kernel must be called to initiate network traffic. Instead it provides memory extension by connecting each node, through a MemNet device, to a high-speed token ring network. The shared memory is managed in fixed-size chunks, and each MemNet device keeps a chunk table recording whether a given chunk is held locally or must be requested over the ring.
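
The per-reference path through the chunk table can be illustrated with the hedged Python sketch below; the chunk size, table layout, Ring class and message handling are assumptions made for this example rather than MemNet's actual hardware interface.

    # Illustrative sketch of a MemNet-style device resolving a memory reference
    # through its chunk table. Chunk size, field names and the ring protocol are
    # assumptions for this example, not the real hardware interface.

    CHUNK_SIZE = 32

    class MemNetDevice:
        def __init__(self, node_id, ring):
            self.node_id = node_id
            self.ring = ring                   # token ring linking all MemNet devices
            self.chunk_table = {}              # chunk number -> (resident?, local data)

        def reference(self, address):
            chunk_no, offset = divmod(address, CHUNK_SIZE)
            resident, data = self.chunk_table.get(chunk_no, (False, None))
            if not resident:
                # Chunk not held locally: send a request around the ring and
                # cache the chunk that comes back.
                data = self.ring.request(chunk_no, requester=self.node_id)
                self.chunk_table[chunk_no] = (True, data)
            return data[offset]

    class Ring:
        def __init__(self, devices=None):
            self.devices = devices or []

        def request(self, chunk_no, requester):
            # The request circulates; the first device holding the chunk replies.
            for dev in self.devices:
                if dev.node_id != requester:
                    resident, data = dev.chunk_table.get(chunk_no, (False, None))
                    if resident:
                        return data
            raise KeyError(chunk_no)

    # Usage: node 1 touches an address whose chunk currently resides on node 0.
    ring = Ring()
    d0, d1 = MemNetDevice(0, ring), MemNetDevice(1, ring)
    ring.devices.extend([d0, d1])
    chunk = bytearray(CHUNK_SIZE)
    chunk[5] = 9
    d0.chunk_table[3] = (True, chunk)
    print(d1.reference(3 * CHUNK_SIZE + 5))    # prints 9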

Disadvantages include:

  • The use of the chunk table renders the scheme unscalable
  • The nature of the ring network restricts the scalability of the system
  • There is no protection over access to data and no provision for fault tolerance

Advantages include:

  • Location transparency (via the ring-based chunk request mechanism)
  • Naming transparency (via unique virtual addresses)