Chapter 15
Multiple Independent Processes
Object
Magnitude
Character
Date
Time
Number
Float
Fraction
Integer
LargeNegativeInteger
LargePositiveInteger
SmallInteger
LookupKey
Association
Link
Process***
Collection
SequenceableCollection
LinkedList
Semaphore***
ArrayedCollection
Array
Bitmap
DisplayBitmap
RunArray
String
Symbol
Text
ByteArray
Interval
OrderedCollection
SortedCollection
Bag
MappedCollection
Set
Dictionary
IdentityDictionary
Stream
PositionableStream
ReadStream
WriteStream
ReadWriteStream
ExternalStream
FileStream
Random
File
FileDirectory
FilePage
UndefinedObject
Boolean
False
True
ProcessorScheduler***
Delay***
SharedQueue***
Behavior
ClassDescription
Class
Metaclass
Point
Rectangle
BitBlt
CharacterScanner
Pen
DisplayObject
DisplayMedium
Form
Cursor
DisplayScreen
InfiniteForm
OpaqueForm
Path
Arc
Circle
Curve
Line
LinearFit
Spline
The Smalltalk-80 system provides support for multiple independent processes with three classes named Process, ProcessorScheduler, and Semaphore. A Process represents a sequence of actions that can be carried out independently of the actions represented by other Processes. A ProcessorScheduler schedules the use of the Smalltalk-80 virtual machine that actually carries out the actions represented by the Processes in the system. There may be many Processes whose actions are ready to be carried out and ProcessorScheduler determines which of these the virtual machine will carry out at any particular time. A Semaphore allows otherwise independent processes to synchronize their actions with each other. Semaphores provide a simple form of synchronous communication that can be used to create more complicated synchronized interactions. Semaphores also provide synchronous communication with asynchronous hardware devices such as the user input devices and realtime clock.
Semaphores provide only a very simple form of synchronization and are often not the most convenient mechanism to use directly. Instances of SharedQueue and Delay use Semaphores to satisfy the two most common needs for synchronization: a SharedQueue provides safe transfer of objects between independent processes, and a Delay allows a process to be synchronized with the real time clock.
Processes
A process is a sequence of actions described by expressions and performed by the Smalltalk-80 virtual machine. Several of the processes in the system monitor asynchronous hardware devices. For example, there are processes monitoring the keyboard, the pointing device, and the realtime clock. There is also a process monitoring the available memory in the system. The most important process to the user is the one that performs the actions directly specified by the user, for example, editing text, graphics, or class definitions. This user interface process must communicate with the processes monitoring the keyboard and pointing device to find out what the user is doing. Processes might be added that update a clock or a view of a user-defined object.
A new process can be created by sending the unary message fork to a block. For example, the following expression creates a new process to display three clocks named EasternTime, MountainTime, and PacificTime on the screen.
[EasternTime display.
MountainTime display.
PacificTime display] fork
The actions that make up the new process are described by the block's expressions. The message fork has the same effect on these expressions as does the message value, but it differs in the way the result of the message is returned. When a block receives value, it waits to return until all of its expressions have been executed. For example, the following expression does not produce a value until all three clocks have been completely displayed.
[EasternTime display.
MountainTime display.
PacificTime display] value
The value returned from sending a block value is the value of the last expression in the block. When a block receives fork, it returns immediately, usually before its expressions have been executed. This allows the expressions following the fork message to be executed independently of the expressions in the block. For example, the following two expressions would result in the contents of the collection nameList being sorted independently of the three clocks being displayed.
[EasternTime display.
MountainTime display.
PacificTime display] fork.
alphabeticalList ← nameList sort
The entire collection may be sorted before any of the clocks are displayed or all of the clocks may be displayed before the collection begins sorting. The occurrence of either one of these extreme cases or an intermediate case in which some sorting and some clock display are interspersed is determined by the way that display and sort are written. The two processes, the one that sends the messages fork and sort, and the one that sends display, are executed independently. Since a block's expressions may not have been evaluated when it returns from fork, the value of fork must be independent of the value of the block's expressions. A block returns itself as the value of fork.
Each process in the system is represented by an instance of class Process. A block's response to fork is to create a new instance of Process and schedule the processor to execute the expressions it contains. Blocks also respond to the message newProcess by creating and returning a new instance of Process, but the virtual machine is not scheduled to execute its expressions. This is useful because, unlike fork, it provides a reference to the Process itself. A Process created by newProcess is called suspended since its expressions are not being executed. For example, the following expression creates two new Processes but does not result in either display or sort being sent.
clockDisplayProcess ← [ EasternTime display ] newProcess.
sortingProcess ← [ alphabeticalList ← nameList sort ] newProcess
The actions represented by one of these suspended Processes can actually be carried out by sending the Process the message resume. The following two expressions would result in display being sent to EasternTime and sort being sent to nameList.
clockDisplayProcess resume.
sortingProcess resume
Since display and sort would be sent from different Processes, their execution may be interleaved. Another example of the use of resume is the implementation of fork in BlockContext.
fork
self newProcess resume
A complementary message, suspend, returns a Process to the suspended state in which the processor is no longer executing its expressions. The message terminate prevents a Process from ever running again, whether it was suspended or not.
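For example, the sortingProcess created earlier could be paused and later continued, or stopped for good; this is a sketch continuing that example.
sortingProcess suspend.     "the sort stops advancing"
sortingProcess resume.      "the sort continues from where it stopped"
sortingProcess terminate    "the sort can never be run again"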
changing process state | |
resume | Allow the receiver to be advanced. |
suspend | Stop the advancement of the receiver in such a way that it can resume its progress later (by sending it the message resume). |
terminate | Stop the advancement of the receiver forever. |
Process instance protocol |
Blocks also understand a message with selector newProcessWith: that creates and returns a new Process supplying values for block arguments. The argument of newProcessWith: is an Array whose elements are used as the values of the block arguments. The size of the Array should be equal to the number of block arguments the receiver takes. For example,
displayProcess ← [ :clock | clock display ]
newProcessWith: (Array with: MountainTime)
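Like any Process answered by newProcess or newProcessWith:, displayProcess is suspended; MountainTime is not displayed until the Process receives resume.
displayProcess resume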
The protocol of BlockContext that allows new Processes to be created is shown below.
scheduling | |
fork | Create and schedule a new Process for the execution of the expressions the receiver contains. |
newProcess | Answer a new suspended Process for the execution of the expressions the receiver contains. The new Process is not scheduled. |
newProcessWith: argumentArray | Answer a new suspended Process for the execution of the expressions the receiver contains supplying the elements of argumentArray as the values of the receiver's block arguments. |
BlockContext instance protocol |
Scheduling
The Smalltalk-80 virtual machine has only one processor capable of carrying out the sequence of actions a Process represents. So when a Process receives the message resume, its actions may not be carried out immediately. The Process whose actions are currently being carried out is called active. Whenever the active Process receives the message suspend or terminate, a new active Process is chosen from those that have received resume. The single instance of class ProcessorScheduler keeps track of all of the Processes that have received resume. This instance of ProcessorScheduler has the global name Processor. The active Process can be found by sending Processor the message activeProcess. For example, the active Process can be terminated by the expression
Processor activeProcess terminate
This will be the last expression executed in that Process. Any expressions following it in a method would never be executed. Processor will also terminate the active Process in response to the message terminateActive.
Processor terminateActive
Priorities
Ordinarily, Processes are scheduled for the use of the processor on a simple first-come first-served basis. Whenever the active Process receives suspend or terminate, the Process that has been waiting the longest will become the new active Process. In order to provide more control of when a Process will run, Processor uses a very simple priority mechanism. There are a fixed number of priority levels numbered by ascending integers. A Process with a higher priority will gain the use of the processor before a Process with a lower priority, independent of the order of their requests. When a Process is created (with either fork or newProcess), it will receive the same priority as the Process that created it. The priority of a Process can be changed by sending it the message priority: with the priority as an argument. Or the priority of a Process can be specified when it is forked by using the message forkAt: with the priority as an argument. For example, consider the following expressions executed in a Process at priority 4.
wordProcess ← [['now' displayAt: 50@100] forkAt: 6.
['is' displayAt: 100@100] forkAt: 5.
'the' displayAt: 150@100]
newProcess.
wordProcess priority: 7.
'time' displayAt: 200@100.
wordProcess resume.
'for' displayAt: 250@100
Because the Process resumed at priority 7 preempts the priority-4 Process immediately, and the Processes it forks at priorities 6 and 5 then run in priority order before control returns to the priority-4 Process, the sequence of displays on the screen would be as follows.
time
the time
now the time
now is the time
now is the time for
Priorities are manipulated with a message to Processes and a message to BlockContexts.
accessing | |
priority: anInteger | Set the receiver's priority to be anInteger. |
Process instance protocol |
scheduling | |
forkAt: priority | Create a new process for the execution of the expressions the receiver contains. Schedule the new process at the priority level priority. |
BlockContext instance protocol |
The methods in the Smalltalk-80 system do not actually specify priorities with literal integers. The appropriate priority to use is always obtained by sending a message to Processor. The messages used to obtain priorities are shown in the protocol for class ProcessorScheduler.
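For example, a clock display that should run only when nothing else is demanding the processor might be forked as follows (a sketch reusing the clock from the earlier examples).
[EasternTime display] forkAt: Processor userBackgroundPriority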
One other message to Processor allows other Processes with the same priority as the active Process to gain access to the processor. The ProcessorScheduler responds to the message yield by suspending the active Process and placing it on the end of the list of Processes waiting at its priority. The first Process on the list then becomes the active Process. If there are no other Processes at the same priority, yield has no effect.
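For example, a long-running computation might periodically offer the processor to other Processes at its priority. This is only a sketch; model and computeStep: are hypothetical names standing for one unit of the computation.
[1 to: 1000 do:
    [ :index |
    model computeStep: index.        "hypothetical unit of work"
    Processor yield]] fork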
accessing | |
activePriority | Answer the priority of the currently running process. |
activeProcess | Answer the currently running process. |
process state change | |
terminateActive | Terminate the currently running process. |
yield | Give other processes at the priority of the currently running process a chance to run. |
priority names | |
highIOPriority | Answer the priority at which the most time critical input/output processes should run. |
lowIOPriority | Answer the priority at which most input/output processes should run. |
systemBackgroundPriority | Answer the priority at which system background processes should run. |
timingPriority | Answer the priority at which the system processes keeping track of real time should run. |
userBackgroundPriority | Answer the priority at which background processes created by the user should run. |
userInterruptPriority | Answer the priority at which processes created by the user and desiring immediate service should run. |
userSchedulingPriority | Answer the priority at which the user interface processes should run. |
ProcessorScheduler instance protocol |
The messages to ProcessorScheduler requesting priorities were listed in alphabetical order above since this is the standard for protocol descriptions. The same messages are listed below from highest priority to lowest priority along with some examples of Processes that might have that priority.
priority names | |
timingPriority | The Process monitoring the real time clock (see description of class Wakeup later in this chapter). |
highIOPriority | The Process monitoring the local network communication device. |
lowIOPriority | The Process monitoring the user input devices and the Process distributing packets from the local network. |
userInterruptPriority | Any Process forked by the user interface that should be executed immediately. |
userSchedulingPriority | The Process performing actions specified through the user interface (editing, viewing, programming, and debugging). |
userBackgroundPriority | Any Process forked by the user interface that should be executed only when nothing else is happening. |
systemBackgroundPriority | A system Process that should be executed when nothing else is happening. |
ProcessorScheduler instance protocol |
Semaphores
The sequence of actions represented by a Process is carried out asynchronously with the actions represented by other Processes. The function of one Process is independent of the function of another. This is appropriate for Processes that never need to interact. For example, the two Processes shown below that display clocks and sort a collection probably do not need to interact with each other at all.
[EasternTime display.
MountainTime display.
PacificTime display] fork.
alphabeticalList ← nameList sort
However, some Processes that are substantially independent must interact occasionally. The actions of these loosely dependent Processes must be synchronized while they interact. Instances of Semaphore provide a simple form of synchronized communication between otherwise independent Processes. A Semaphore provides for the synchronized communication of a simple signal (essentially one bit of information) from one process to another. A Semaphore provides a nonbusy wait for a Process that attempts to consume a signal that has not been produced yet. Semaphores are the only safe mechanism provided for interaction between Processes. Any other mechanisms for interaction should use Semaphores to ensure their synchronization.
Communication with a Semaphore is initiated in one Process by sending it the message signal. On the other end of the communication, another Process waits to receive the simple communication by sending wait to the same Semaphore. It does not matter in which order the two messages are sent, the Process waiting for a signal will not proceed until one is sent. A Semaphore will only return from as many wait messages as it has received signal messages. If a signal and two waits are sent to a Semaphore, it will not return from one of the wait messages. When a Semaphore receives a wait message for which no corresponding signal was sent, it suspends the process from which the wait was sent.
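For example, a Semaphore can make one Process wait until another has finished displaying a clock; it does not matter whether the display finishes before or after the wait is sent. This is a minimal sketch reusing the clock examples.
clockShown ← Semaphore new.
[EasternTime display.
clockShown signal] fork.
clockShown wait.
'clock displayed' displayAt: 100@100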
communication | |
signal | Send a signal through the receiver. If one or more Processes have been suspended trying to receive a signal, allow the one that has been waiting the longest to proceed. If no Process is waiting, remember the excess signal. |
wait | The active Process must receive a signal through the receiver before proceeding. If no signal has been sent, the active Process will be suspended until one is sent. |
Semaphore instance protocol |
The processes that have been suspended will be resumed in the same order in which they were suspended. A Process's priority is only taken into account by Processor when scheduling it for the use of the processor. Each Process waiting for a Semaphore will be resumed on a first-come first-served basis, independent of its priority. A Semaphore allows a Process to wait for a signal that has not been sent without using processor capacity. The Semaphore does not return from wait until signal has been sent. One of the main advantages of creating an independent process for a particular activity is that, if the process requires something that is not available, other processes can proceed while the first process waits for it to become available. Examples of things that a process may require and that may or may not be available are hardware devices, user events (keystrokes or pointing device movements), and shared data structures. A specific time of day can also be thought of as something that might be required for a process to proceed.
Mutual Exclusion
Semaphores can be used to ensure mutually exclusive use of certain facilities by separate Processes. For example, a Semaphore might be used to provide a data structure that can be safely accessed by separate Processes. The following definition of a simple first-in first-out data structure does not have any provision for mutual exclusion.
class name | SimpleQueue |
superclass | Object |
instance variable names | contentsArray readPosition writePosition |
class methods | instance creation
new
↑self new: 10
new: size
↑super new init: size
|
instance methods | accessing
next
| value |
readPosition = writePosition
ifTrue: [self error: 'empty queue']
ifFalse: [value ← contentsArray at: readPosition.
contentsArray at: readPosition put: nil.
readPosition ← readPosition + 1.
↑value]
nextPut: value
writePosition > contentsArray size
ifTrue: [self makeRoomForWrite].
contentsArray at: writePosition put: value.
writePosition ← writePosition + 1.
↑value
size
↑writePosition - readPosition
testing
isEmpty
↑writePosition = readPosition
private
init: size
contentsArray ← Array new: size.
readPosition ← 1.
writePosition ← 1
makeRoomForWrite
| contentsSize |
readPosition = 1
ifTrue: [contentsArray grow]
ifFalse:
[contentsSize ← writePosition - readPosition.
1 to: contentsSize do:
[ :index |
contentsArray
at: index
put: (contentsArray at: index + readPosition - 1)].
readPosition ← 1.
writePosition ← contentsSize + 1]
|
A SimpleQueue remembers its contents in an Array named contentsArray and maintains two indices into the contentsArray named readPosition and writePosition. New contents are added at writePosition and removed at readPosition. The private message makeRoomForWrite is sent when there is no room at the end of contentsArray for remembering a new object. If contentsArray is completely full, its size is increased. Otherwise, the contents are moved to the first of contentsArray.
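For example, within a single Process a SimpleQueue behaves as an ordinary first-in first-out buffer; the comments show the expected results of this sketch.
queue ← SimpleQueue new.
queue nextPut: 'now'.
queue nextPut: 'is'.
queue next.         "answers 'now'"
queue size          "answers 1"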
The problem with sending to a SimpleQueue from different Processes is that more than one Process at a time may be executing the method for next or nextPut:. Suppose a SimpleQueue were sent the message next from one Process, and had just executed the expression
value ← contentsArray at: readPosition
when a higher priority Process woke up and sent another next message to the same SimpleQueue. Since readPosition has not been incremented, the second execution of the expression above will bind the same object to value. The higher priority Process will remove the reference to the object from contentsArray, increment the readPosition and return the object it removed. When the lower priority Process gets control back, readPosition has been incremented so it removes the reference to the next object from contentsArray. This object should have been the value of one of the next messages, but it is discarded and both next messages return the same object.
To ensure mutual exclusion, each Process must wait for the same Semaphore before using a resource and then signal the Semaphore when it is finished. The following subclass of SimpleQueue provides mutual exclusion so that its instances can be used from separate Processes.
class name | SimpleSharedQueue |
superclass | SimpleQueue |
instance variable names | accessProtect |
instance methods | accessing
next
| value |
accessProtect wait.
value ← super next.
accessProtect signal.
↑value
nextPut: value
accessProtect wait.
super nextPut: value.
accessProtect signal.
↑value
private
init: size
super init: size.
accessProtect ← Semaphore new.
accessProtect signal
|
Since mutual exclusion is a common use of Semaphores, they include a message for it. The selector of this message is critical:. The implementation of critical: is as follows.
critical: aBlock
| value |
self wait.
value ← aBlock value.
self signal.
↑value
A Semaphore used for mutual exclusion must start out with one excess signal so the first Process may enter the critical section. Class Semaphore provides a special initialization message, forMutualExclusion, that signals the new instance once.
mutual exclusion | |
critical: aBlock | Execute aBlock when no other critical blocks are executing. |
Semaphore instance protocol |
instance creation | |
forMutualExclusion | Answer a new Semaphore with one excess signal. |
Semaphore class protocol |
The implementation of SimpleSharedQueue could be changed to read as follows.
class name | SimpleSharedQueue |
superclass | SimpleQueue |
instance variable names | accessProtect |
instance methods | accessing
next
| value |
accessProtect critical: [ value ← super next ].
↑value
nextPut: value
accessProtect critical: [super nextPut: value ].
↑value
private
init: size
super init: size.
accessProtect ← Semaphore forMutualExclusion
|
Resource Sharing
In order for two Processes to share a resource, mutually exclusive access to it is not enough. The Processes must also be able to communicate about the availability of the resource. SimpleSharedQueue will not get confused by simultaneous accesses, but if an attempt is made to remove an object from an empty SimpleSharedQueue, an error occurs. In an environment with asynchronous Processes, it is inconvenient to guarantee that attempts to remove objects (by sending next) will be made only after they have been added (by sending nextPut:). Therefore, Semaphores are also used to signal the availability of shared resources. A Semaphore representing a resource is signalled after each unit of the resource is made available and waited for before consuming each unit. Therefore, if an attempt is made to consume a resource before it has been produced, the consumer simply waits.
Class SafeSharedQueue is an example of how Semaphores can be used to communicate about the availability of resources. SafeSharedQueue is similar to SimpleSharedQueue, but it uses another Semaphore named valueAvailable to represent the availability of the contents of the queue. SafeSharedQueue is not in the Smalltalk-80 system; it is described here only as an example. SharedQueue is the class that is actually used to communicate between processes in the system. SharedQueue provides functionality similar to SafeSharedQueue's. The protocol specification for SharedQueue will be given later in this chapter.
class name | SafeSharedQueue |
superclass | SimpleQueue |
instance variable names | accessProtect valueAvailable |
instance methods | accessing
next
| value |
valueAvailable wait.
accessProtect critical: [ value ← super next ].
↑value
nextPut: value
accessProtect critical: [ super nextPut: value ].
valueAvailable signal.
↑value
private
init: size
super init: size.
accessProtect ← Semaphore forMutualExclusion.
valueAvailable ← Semaphore new
|
Hardware Interrupts
Instances of Semaphore are also used to communicate between hardware devices and Processes. In this capacity, they take the place of interrupts as a means of communicating about the changes of state that hardware devices go through. The Smalltalk-80 virtual machine is specified to signal Semaphores on three conditions.
- user event: a key has been pressed on the keyboard, a button has been pressed on the pointing device, or the pointing device has moved.
- timeout: a specific value of the millisecond clock has been reached.
- low space: available object memory has fallen below certain limits.
These three Semaphores correspond to three Processes monitoring user events, the millisecond clock, and memory utilization. Each monitoring Process sends wait to the appropriate Semaphore, suspending itself until something of interest happens. Whenever the Semaphore is signalled, the Process will resume. The virtual machine is notified about these three types of monitoring by primitive methods. For example, the timeout signal can be requested by a primitive method associated with the message signal:atTime: to Processor.
Class Wakeup is an example of how one of these Semaphores can be used. Wakeup provides an alarm clock service to Processes by monitoring the millisecond clock. Wakeup is not in the Smalltalk-80 system; it is described here only as an example. Delay is the class that actually monitors the millisecond clock in the Smalltalk-80 system. Delay provides functionality similar to Wakeup's. The protocol specification for Delay will be given later in this chapter.
Wakeup provides a message that suspends the sending Process for a specified number of milliseconds. The following expression suspends its Process for three quarters of a second.
Wakeup after: 750
When Wakeup receives an after: message, it allocates a new instance which remembers the value of the clock at which the wakeup should occur. The new instance contains a Semaphore on which the active Process will be suspended until the wakeup time is reached. Wakeup keeps all of its instances in a list sorted by their wakeup times. A Process monitors the virtual machine's millisecond clock for the earliest of these wakeup times and allows the appropriate suspended Process to proceed. This Process is created in the class method for initializeTimingProcess. The Semaphore used to monitor the clock is referred to by a class variable named TimingSemaphore. The virtual machine is informed that the clock should be monitored with the following message found in the instance method for nextWakeup.
Processor signal: TimingSemaphore atTime: alarmTime
The list of instances waiting for resumption is referred to by a class variable named PendingWakeups. There is another Semaphore named AccessProtect that provides mutually exclusive access to PendingWakeups.
class name | Wakeup |
superclass | Object |
instance variable names | alarmTime alarmSemaphore |
class variable names | PendingWakeups AccessProtect TimingSemaphore |
class methods | alarm clock service
after: millisecondCount
(self new sleepDuration: millisecondCount) waitForWakeup
class initialization
initialize
TimingSemaphore ← Semaphore new.
AccessProtect ← Semaphore forMutualExclusion.
PendingWakeups ← SortedCollection new.
self initializeTimingProcess
initializeTimingProcess
[[true]
whileTrue:
[TimingSemaphore wait.
AccessProtect wait.
PendingWakeups removeFirst wakeup.
PendingWakeups isEmpty
ifFalse: [PendingWakeups first nextWakeup].
AccessProtect signal]]
forkAt: Processor timingPriority
|
instance methods | process delay
waitForWakeup
AccessProtect wait.
PendingWakeups add: self.
PendingWakeups first == self
ifTrue: [self nextWakeup].
AccessProtect signal.
alarmSemaphore wait
comparison
< otherWakeup
↑alarmTime < otherWakeup wakeupTime
accessing
wakeupTime
↑alarmTime
private
nextWakeup
Processor signal: TimingSemaphore atTime: alarmTime
sleepDuration: millisecondCount
alarmTime ← Time millisecondClockValue + millisecondCount.
alarmSemaphore ← Semaphore new
wakeup
alarmSemaphore signal
|
Class SharedQueue is the system class whose instances provide safe communication of objects between Processes. Both its protocol and its implementation are similar to the SafeSharedQueue example shown earlier in this chapter.
accessing | |
next | Answer with the first object added to the receiver that has not yet been removed. If the receiver is empty, suspend the active Process until an object is added to it. |
nextPut: value | Add value to the contents of the receiver. If a Process has been suspended waiting for an object, allow it to proceed. |
SharedQueue instance protocol |
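For example (a sketch), a SharedQueue allows a producing Process to pass objects safely to a consuming Process; the consumer simply waits if nothing has been added yet.
wordQueue ← SharedQueue new.
[wordQueue next displayAt: 100@100] fork.      "consumer: waits until a word is available"
[wordQueue nextPut: 'now is the time'] fork    "producer: allows the consumer to proceed"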
Class Delay
A Delay allows a Process to be suspended for a specified amount of time. A Delay is created by specifying how long it will suspend the active Process.
halfMinuteDelay ← Delay forSeconds: 30.
shortDelay ← Delay forMilliseconds: 50
Simply creating a Delay has no effect on the progress of the active Process. It is in response to the message wait that a Delay suspends the active Process. The following expressions would both suspend the active Process for 30 seconds.
halfMinuteDelay wait.
(Delay forSeconds: 30) wait
instance creation | |
forMilliseconds: millisecondCount | Answer with a new instance that will suspend the active Process for millisecondCount milliseconds when sent the message wait. |
forSeconds: secondCount | Answer with a new instance that will suspend the active Process for secondCount seconds when sent the message wait. |
untilMilliseconds: millisecondCount | Answer with a new instance that will suspend the active Process until the millisecond clock reaches the value millisecondCount. |
general inquiries | |
millisecondClockValue | Answer with the current value of the millisecond clock. |
Delay class protocol |
accessing | |
resumptionTime | Answer with the value of the millisecond clock at which the delayed Process will be resumed. |
process delay | |
wait | Suspend the active Process until the millisecond clock reaches the appropriate value. |
Delay instance protocol |
A trivial clock can be implemented with the following expression.
[[true] whileTrue:
[Time now printString displayAt: 100@100.
(Delay forSeconds: 1) wait]] fork
The current time would be displayed on the screen once a second.