As already explained, KERNEL32 maps quite a few of its APIs to NTDLL. There are, however, a few things which are handled directly in KERNEL32. Let's cover some of them...
Windows implements consoles solely in the Win32 subsystem. Under NT, the real implementation uses a dedicated subsystem, csrss.exe (Client/Server Run-time SubSystem), which is in charge, among other things, of animating the consoles. Animating includes, for example, handling several processes on the same console (write operations must be atomic, and a character typed on the console must be read by a single process), or sending some information back to the processes (changing the size or attributes of the console, closing the console). Windows NT uses a dedicated (RPC based) protocol between each process attached to a console and the csrss.exe subsystem, which is in charge of the UI of every console in the system.
Wine tries to integrate as much as possible into the Unix consoles, but the overall situation isn't perfect yet. Basically, Wine implements three kinds of consoles:
the first one is a direct mapping of the Unix console into the Windows environment. From the Windows program's point of view, it won't run in a Windows console; instead, its standard input and output streams are redirected to files, and those files are hooked into the corresponding streams of the Unix console. This is handy for running programs from a Unix command line (and using the result of the program as if it were a Unix program), but it lacks all the semantics of Windows consoles (a small probe illustrating this case is sketched after this list).
the second and third ones are closer to the NT scheme, albeit different from what NT does. The wineserver plays the role of the csrss.exe subsystem: all requests are sent to it, and are then dispatched to a dedicated Wine process, called (surprise!) wineconsole, which manages the UI of the console. There is a running instance of wineconsole for every console in the system. Two flavors of this scheme are actually implemented; they differ in the backend used by wineconsole. The first one, dubbed user, creates a real GUI window (hence the USER name) and renders the console in this window. The second one uses the (n)curses library to take full control of an existing Unix console; of course, interaction with other Unix programs will not be as smooth as with the first solution.
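The practical difference can be seen from a Win32 program by probing its standard output handle: with bare streams, GetConsoleMode() fails, while under wineconsole it behaves as it does on Windows. A minimal sketch (not part of Wine itself):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode;

    /* With "bare streams", the handle is just a wrapper around the Unix
     * stream and GetConsoleMode() fails; with wineconsole it succeeds. */
    if (GetConsoleMode(out, &mode))
        printf("running on a real Win32 console (mode=0x%lx)\n", mode);
    else
        printf("standard output is not a Win32 console object\n");
    return 0;
}
```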
The following table describes the main implementation differences between the three approaches.
Table 8-4. Console functions implementation comparison
Function | Bare streams | Wineconsole & user backend | Wineconsole & curses backend |
---|---|---|---|
Console as a Win32 Object (and associated handles) | No specific Win32 object is used in this case. The handles manipulated for the standard Win32 streams are in fact "bare handles" to their corresponding Unix streams. The mode manipulation functions (GetConsoleMode() / SetConsoleMode()) are not supported. | Implemented in the server; a specific Winelib program (wineconsole) is in charge of the rendering and user input. The mode manipulation functions behave as expected. | Implemented in the server; a specific Winelib program (wineconsole) is in charge of the rendering and user input. The mode manipulation functions behave as expected. |
Inheritance (including handling in CreateProcess() of the DETACHED_PROCESS and CREATE_NEW_CONSOLE flags) | Not supported. Every child of a process will inherit the Unix streams, and thus also the Win32 standard streams. | Fully supported (each new console creation is handled by the creation of a new USER32 window). | Fully supported, except for the creation of a new console, which is rendered on the same Unix terminal as the previous one, leading to unpredictable results. |
ReadFile() / WriteFile() operations | Fully supported | Fully supported | Fully supported |
Screen-buffer manipulation (creation, deletion, resizing...) | Not supported | Fully supported | Partly supported (this won't work too well, as we don't control (so far) the size of the underlying Unix terminal) |
APIs for reading/writing screen-buffer contents, cursor position | Not supported | Fully supported | Fully supported |
APIs for manipulating the rendering window size | Not supported | Fully supported | Partly supported (this won't work too well, as we don't control (so far) the size of the underlying Unix terminal) |
Signaling (in particular, Ctrl-C handling) | Nothing is done, which means that Ctrl-C will generate (as usual) a SIGINT, which will terminate the program. | Partly supported (Ctrl-C behaves as expected, but the rest of the Win32 CUI signaling isn't properly implemented). | Partly supported (Ctrl-C behaves as expected, but the rest of the Win32 CUI signaling isn't properly implemented). |
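As an illustration of the signaling row above, this is how a Win32 console program would typically register a Ctrl-C handler; the snippet is plain Win32 code, not Wine-specific:

```c
#include <windows.h>
#include <stdio.h>

static BOOL WINAPI ctrl_handler(DWORD type)
{
    if (type == CTRL_C_EVENT)
    {
        printf("got Ctrl-C as a Win32 console event\n");
        return TRUE;   /* handled: do not terminate the process */
    }
    return FALSE;      /* let the default handler deal with other events */
}

int main(void)
{
    SetConsoleCtrlHandler(ctrl_handler, TRUE);
    printf("press Ctrl-C (or wait 10 seconds)...\n");
    Sleep(10000);
    return 0;
}
```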
The Win32 objects behind a console can be created on several occasions:
When the program is started from wineconsole, a new console object is created and will be used (inherited) by the process launched from wineconsole.
When a program which isn't attached to a console calls AllocConsole(), Wine launches wineconsole and attaches the current program to this console. In this mode, the USER32 backend is always selected, as Wine cannot tell the current state of the Unix console (a minimal example of this path is sketched after this list).
Please also note that starting a child process with the CREATE_NEW_CONSOLE flag will end up calling AllocConsole() in the child process, hence creating a wineconsole with the USER32 backend.
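The AllocConsole() path can be exercised with a few lines of plain Win32 code; under Wine, running something like the following from a process without a console is what triggers the launch of a new wineconsole with the USER32 backend:

```c
#include <windows.h>

int main(void)
{
    /* Assume we were started without a console (e.g. as a GUI process). */
    if (AllocConsole())
    {
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        DWORD written;

        /* Under Wine, this screen buffer is rendered by the newly
         * started wineconsole instance. */
        WriteConsoleA(out, "hello from the new console\n", 28, &written, NULL);
        Sleep(3000);
        FreeConsole();
    }
    return 0;
}
```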
Another interesting point to note is that Windows implements handles to console objects (input and screen buffers) only in the KERNEL32 DLL; they are neither sent to nor seen at the NTDLL level, even though, for example, consoles are waitable on input. How is this possible? Well, Windows NT is a bit tricky here. Regular handles have an interesting property: their integral value is always a multiple of four (they are likely to be offsets from the beginning of a table). Console handles, on the other hand, are not multiples of four, but have the two lower bits set (being a multiple of four means having the two lower bits reset). When KERNEL32 sees a handle with the two lower bits set, it knows it's a console handle and makes the appropriate decisions. For example, in the various kernel32!WaitFor*() functions, it transforms any console handle (input and output - strangely enough, handles to a console's screen buffers are waitable) into a dedicated wait event for the targeted console. There's an (undocumented) KERNEL32 function, GetConsoleInputWaitHandle(), which returns the handle to this event in case you need it. Another interesting handling of those console handles is in ReadFile() (resp. WriteFile()), whose behavior, for console handles, is forwarded to ReadConsole() (resp. WriteConsole()). Note that it's always the ANSI version of ReadConsole() / WriteConsole() which is called, hence using the console's default code page. There are some other spots affected, but you can look in dlls/kernel to find them all. All of this is implemented in Wine.
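To make the handle tagging concrete, here is a small sketch of the kind of test this implies (a simplified illustration, not Wine's actual code from dlls/kernel):

```c
#include <windows.h>

/* Regular NT handles are multiples of four; console handles have their
 * two lower bits set.  This mirrors the description above and is only
 * an illustration of the idea. */
static BOOL is_console_handle(HANDLE h)
{
    return ((ULONG_PTR)h & 3) == 3;
}

/* A KERNEL32-level routine can then dispatch accordingly, e.g.: */
static BOOL read_any(HANDLE h, void *buf, DWORD len, DWORD *got)
{
    if (is_console_handle(h))
        return ReadConsoleA(h, buf, len, got, NULL); /* console path (ANSI) */
    return ReadFile(h, buf, len, got, NULL);         /* regular NT path */
}
```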
Wine also implements the same registry layout as Windows for storing the console preferences. Those settings can be defined either globally, or on a per process name basis. wineconsole lets the user choose which registry part (global, or the currently running program) the settings should be modified for.
Table 8-5. Console registry settings
Name | Default value | Purpose |
---|---|---|
CursorSize | 25 | Percentage of the cell height to which the cursor extends |
CursorVisible | 1 | Whether the cursor is visible or not |
EditionMode | 0 | The way editing takes place in the console: 0 is insertion mode, 1 is overwrite mode. |
ExitOnDie | 1 | Whether the console should close itself when the last running program attached to it dies |
FaceName | No default | Name of the font to be used for display. When none is given, wineconsole tries its best to pick up a decent font |
FontSize | 0x0C08 | The high word is the font cell's height, and the low word is the font cell's width. The default value is 12 pixels in height and 8 pixels in width. |
FontWeight | 0 | Weight of the font. If none is given (or 0), wineconsole picks a decent font weight |
HistoryBufferSize | 50 | Number of entries in history buffer (not actually used) |
HistoryNoDup | 0 | Whether the history should store the same entry twice |
MenuMask | 0 | This mask only exists for Wine console handling. It describes which combination of extra keys is needed to open the configuration window on a right click. The mask can include the MK_CONTROL or MK_SHIFT bits. This can be needed when programs actually need the right click to be passed to them instead of being intercepted by wineconsole. |
QuickEdit | 0 | If zero, mouse events are sent to the application. If non-zero, mouse events are used to select text in the window. This setting really has to be set on a per-application basis, because it depends on whether the CUI application uses mouse events or not. |
ScreenBufferSize | 0x1950 | The high word is the number of font cells in the height of the screen buffer, while the low word is the number of font cells in the width of the screen buffer. |
ScreenColors | 0x000F | Default color attribute for the screen buffer (the low 4 bits are the foreground color, and the next 4 bits are the background color) |
WindowSize | 0x1950 | The high word is the number of font cells in the height of the window, while the low word is the number of font cells in the width of the window. This window is the visible part of the screen buffer: this implies that a screen buffer must always be bigger than its window, and that the screen buffer can be scrolled so that every cell of the screen buffer can be seen in the window. |
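For illustration, a program could read one of these settings through the regular registry API. The HKEY_CURRENT_USER\Console location follows the Windows convention mentioned above; the exact layout used by a given Wine version is an assumption of this sketch:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD size = 25, len = sizeof(size), type;

    /* Global console settings; per-program settings live in a subkey
     * named after the executable. */
    if (RegOpenKeyExA(HKEY_CURRENT_USER, "Console", 0, KEY_READ, &key) == ERROR_SUCCESS)
    {
        if (RegQueryValueExA(key, "CursorSize", NULL, &type,
                             (BYTE *)&size, &len) == ERROR_SUCCESS && type == REG_DWORD)
            printf("CursorSize = %lu%%\n", size);
        RegCloseKey(key);
    }
    return 0;
}
```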
Wine is also able to run 16 bit processes, but this feature is only supported on Intel IA-32 architectures.
When Wine is asked to run an NE executable (a Win16 process), it will in fact hand over its execution to a specific executable, winevdm (VDM stands for Virtual DOS Machine). winevdm is a Winelib application, but it will set up the correct 16 bit environment to run the executable. We will come back later, in more detail, to what this means.
Any new 16 bit process created by this executable (or its children) will run in the same winevdm instance. Within one instance, several facilities are provided to those 16 bit processes, including cooperative multitasking, sharing a single address space, and managing the selectors for the 16 bit segments needed for code, data and stack.
Note that several winevdm instances can run in the same Wine session, but the facilities described above are only shared within a given instance, not among all the instances. winevdm is built as a Winelib application, and hence has access to any facility a 32 bit application has.
Each Win16 application is implemented in winevdm as a Win32 thread. winevdm then implements its own scheduling facilities (in fact, the code for this feature is in the krnl386.exe DLL). Since the required Win16 scheduling is non pre-emptive, this doesn't require any underlying OS kernel support.
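The underlying principle (one Win32 thread per Win16 task, with only one of them running at any given time and explicit hand-over of control) can be sketched with plain Win32 primitives. This is only an illustration of cooperative scheduling, not the actual krnl386.exe code:

```c
#include <windows.h>
#include <stdio.h>

/* One auto-reset event per "task"; a task runs until it explicitly
 * yields, which is the essence of non pre-emptive scheduling. */
static HANDLE wakeup[2];

static void yield_to(int me, int other)
{
    SetEvent(wakeup[other]);                     /* let the other task run */
    WaitForSingleObject(wakeup[me], INFINITE);   /* sleep until it yields back */
}

static DWORD WINAPI task(LPVOID arg)
{
    int me = (int)(ULONG_PTR)arg, other = 1 - me, i;

    WaitForSingleObject(wakeup[me], INFINITE);   /* wait until scheduled */
    for (i = 0; i < 3; i++)
    {
        printf("task %d runs\n", me);
        yield_to(me, other);
    }
    SetEvent(wakeup[other]);                     /* don't leave the peer blocked */
    return 0;
}

int main(void)
{
    HANDLE threads[2];
    int i;

    for (i = 0; i < 2; i++) wakeup[i] = CreateEventA(NULL, FALSE, FALSE, NULL);
    for (i = 0; i < 2; i++)
        threads[i] = CreateThread(NULL, 0, task, (LPVOID)(ULONG_PTR)i, 0, NULL);

    SetEvent(wakeup[0]);                         /* start with task 0 */
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    return 0;
}
```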
SysLevels are an undocumented Windows-internal thread-safety system dedicated to 16 bit applications (or 32 bit applications that call - directly or indirectly - 16 bit code). They are basically critical sections which must be taken in a particular order. The mechanism is generic but there are always three syslevels:
level 1 is the Win16 mutex,
level 2 is the USER mutex,
level 3 is the GDI mutex.
When entering a syslevel, the code (in dlls/kernel/syslevel.c) will check that a higher syslevel is not already held and produce an error if so. This is because it's not legal to enter level 2 while holding level 3 - first, you must leave level 3.
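The ordering rule can be sketched as follows; this is a simplified illustration of the idea, not the code from dlls/kernel/syslevel.c:

```c
#include <windows.h>
#include <assert.h>

/* Each level owns a critical section, and a per-thread bitmask of held
 * levels is used to verify that no higher level is already held when
 * entering a lower one (1 = Win16, 2 = USER, 3 = GDI). */
static CRITICAL_SECTION level_cs[4];               /* indices 1..3 used */
static __declspec(thread) unsigned held_levels;    /* bit N set = level N held */

static void init_sys_levels(void)
{
    int i;
    for (i = 1; i <= 3; i++) InitializeCriticalSection(&level_cs[i]);
}

static void enter_sys_level(int level)
{
    assert((held_levels >> (level + 1)) == 0);     /* no higher level held */
    EnterCriticalSection(&level_cs[level]);
    held_levels |= 1u << level;
}

static void leave_sys_level(int level)
{
    assert(held_levels & (1u << level));
    held_levels &= ~(1u << level);
    LeaveCriticalSection(&level_cs[level]);
}
```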
Throughout the code you may see calls to _ConfirmSysLevel() and _CheckNotSysLevel(). These functions are essentially assertions about the syslevel states and can be used to check that the rules have not been accidentally violated. In particular, _CheckNotSysLevel() will break (probably into the debugger) if the check fails. If this happens, the solution is to get a backtrace and find out, by reading the source of the Wine functions called along the way, how Wine got into the invalid state.
Every Win16 address is expressed in the form selector:offset. The selector is an entry in the LDT, but a 16 bit entry, limiting each offset (and hence each segment) to 64 KB. Since the LDT can hold 8192 entries, the maximum memory available to a Win16 process is 8192 × 64 KB = 512 MB. Note that the processor runs in protected mode, but uses 16 bit selectors.
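As a purely illustrative sketch (the LDT table below is fake and win16_to_linear() is not a Wine function), translating a selector:offset pair into a linear address amounts to adding the 16 bit offset to the base recorded in the selector's LDT entry:

```c
#include <stdint.h>
#include <stdio.h>

/* Fake per-selector base addresses; the real LDT descriptors also carry
 * limits and access rights, which are omitted here. */
static uint32_t ldt_base[8192];

static uint32_t win16_to_linear(uint16_t selector, uint16_t offset)
{
    /* The low 3 selector bits are flags (RPL and table indicator);
     * the remaining 13 bits index the LDT, hence 8192 possible entries. */
    return ldt_base[selector >> 3] + offset;
}

int main(void)
{
    ldt_base[0x000F >> 3] = 0x00400000;   /* pretend this selector maps there */
    printf("000F:1234 -> linear 0x%08x\n", win16_to_linear(0x000F, 0x1234));
    /* 8192 entries * 64 KB per segment = 512 MB of addressable memory. */
    return 0;
}
```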
Windows, for a 16 bit process, defines a few selectors to access the "real" memory (the one provided by DOS). Basically, Wine also provides this area of memory.
The behaviour we just described also applies to DOS executables, which are handled the same way by winevdm. This is only supported on Intel IA-32 architectures.
Wine also implements most of the DOS support in a Wine-specific DLL (winedos). This DLL is called under certain conditions, such as:
In winevdm, when trying to launch a DOS application (.EXE, .COM or .PIF).
In kernel32, when an attempt is made in the binary code to call some DOS or BIOS interrupts (like Int 21h for example).
When winevdm runs a DOS program, this one runs in real mode (in fact in V86 mode from the IA-32 point of view).
Wine also supports part of the DPMI (DOS Protected Mode Interface).
Wine, when running a DOS program, needs to map the first megabyte of virtual memory to the real-mode memory (as seen by the DOS program). When this is not possible (for example, when something else is already using this area), DOS support is not available. Note also that, by doing so, access to linear address 0 is enabled (since it's also real-mode address 0, which is valid). Hence, NULL pointer dereference faults are no longer caught.
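The idea can be sketched with the standard Unix mmap() call; this is a simplification rather than the winedos code, and on many modern systems mapping page zero is additionally forbidden by the kernel (e.g. via vm.mmap_min_addr on Linux):

```c
#include <sys/mman.h>
#include <stdio.h>

#define DOS_MEM_SIZE (1024 * 1024)   /* the first megabyte seen by DOS code */

int main(void)
{
    /* Try to reserve linear addresses 0..1MB so that real-mode
     * segment:offset addresses translate directly into this mapping.
     * If something already occupies that range (or the OS refuses
     * address 0), DOS support cannot work. */
    void *base = mmap((void *)0, DOS_MEM_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (base == MAP_FAILED)
    {
        perror("cannot map the DOS memory area at address 0");
        return 1;
    }
    /* From now on, dereferencing a NULL pointer is a valid access into
     * real-mode address 0 and will no longer fault. */
    printf("DOS area mapped at %p\n", base);
    return 0;
}
```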