Going into the Australian Bush requires the intrepid traveller to carry substantial supplies of water and fuel, as well as the normal requirements for living off-grid for extended periods. This is quite different to the norm in international overlanding, where fuel and water are usually available in small towns, and is due to the extremely low population density of the Australian Bush. The population density of Australia’s Northern Territory is 0.16 people/km², about 1/100th of the density of Argentina with 17 people/km², or 1/25th of Botswana with 4 people/km², for example. So the purpose of my lists is to estimate the weight required to travel with some comfort across long desolate desert tracks, before the required fuel and water supplies are added.
Whilst my lists remain incomplete, they are a useful tool to establish the total equipment and supplies budget, and then contemplate the best method to carry everything. My current calculation shows that the total mass estimate is around 575kg, including fuel and water. A rough breakdown of the categories is below.
Recovery Equipment – 50kg
Vehicle Spares / Consumables – 20kg
Tools – 30kg
Camping Equipment / Tents / Tarps – 90kg
Battery & Electrical – 50kg
Refrigerator / Slide – 45kg
Cooking Utensils – 20kg
Computers / IT / Camera – 20kg
Clothes / Blankets / Linen – 30kg
Food – 30kg
Unclassified / Toys – 20kg
In addition to the items above, it is necessary to carry water sufficient for 20 days, and fuel to bridge the longer distances between services.
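As a sanity check on that 575kg estimate, the category weights can be totalled and rough water and fuel masses added. The 4 litres of water per day and 120 litres of petrol are my own assumptions for illustration, not figures from the lists:

```python
# Rough payload budget: equipment categories (kg) from the lists above,
# plus assumed water and fuel masses.
categories = {
    "Recovery Equipment": 50,
    "Vehicle Spares / Consumables": 20,
    "Tools": 30,
    "Camping Equipment / Tents / Tarps": 90,
    "Battery & Electrical": 50,
    "Refrigerator / Slide": 45,
    "Cooking Utensils": 20,
    "Computers / IT / Camera": 20,
    "Clothes / Blankets / Linen": 30,
    "Food": 30,
    "Unclassified / Toys": 20,
}
equipment_kg = sum(categories.values())     # 405 kg of equipment

days, litres_per_day = 20, 4.0              # assumed drinking/cooking water
water_kg = days * litres_per_day            # 1 L of water is ~1 kg

fuel_litres = 120                           # assumed extra petrol carried
fuel_kg = fuel_litres * 0.74                # petrol is ~0.74 kg/L

total_kg = equipment_kg + water_kg + fuel_kg
print(equipment_kg, round(total_kg))        # 405 574
```

With those assumptions the total lands within a couple of kilograms of the 575kg working figure.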
Now that might look like I’ve budgeted to carry a lot of stuff, but the idea is not to load up to 100% capacity before departure. Rather, the calculation is intended to allow room for growth over time, as stuff tends to accumulate, and trophies and memorabilia will take up their share of space too. Nobody likes to climb into a vehicle and have their stuff fall out on the road because everything is packed to the roof.
So, I’m going to estimate that a total payload budget of 600kg will be sufficient. How can that payload be effectively carried across sand and rock over thousands of kilometres?
Carrying the Payload
As a starting point, the Jeep Wrangler JL Rubicon 2 door is the chosen vehicle for going bush.
From the 2020 Wrangler Specification, the Rubicon can carry a maximum payload of 1322lbs, or 600kg, in the 4 door version. The 2 door version has similar mechanical specifications and weighs about 100kg less, but I will assume that it can’t carry a greater payload than the larger 4 door version. Maximum braked towing capacity for the 2 door version is 1497kg. Let’s have a look at some of the options for carrying 600kg with a 2 door Rubicon.
In the above images I’ve considered some alternative solutions for carrying 600kg payload (in green), and the maximum usable axle load (in red) for the vehicle. My alternatives include:
Bare Vehicle – everything inside the vehicle
Roof Rack – 450kg in vehicle, 150kg on roof (and outside)
Roof Top Tent – 500kg in vehicle, 100kg of RTT (and outside)
Box Trailer – 200kg in vehicle, 400kg in trailer
Pod Trailer – 200kg in vehicle, 400kg in trailer
Teardrop Camper – 200kg in vehicle, 400kg in camper
The Wrangler 2 door is a very small vehicle and, although it is probably the most capable 4WD available “off the showroom floor”, loading it up to the maximum payload will make a very uncomfortable origami that would need to be unfolded at each camp and then intricately repacked each morning. Additionally, as the maximum rear axle payload is about 1000lbs, or 450kg, the available payload would be limited to less than the maximum vehicle payload, as there would be no way to share the weight to the front axle.
In my opinion the only way to carry 600kg on a Rubicon is to distribute the weight onto both axles by using a Roof Rack.
By adding a roof rack, and possibly also side racks for fuel and a rear rack for tools and fuel, it is possible to distribute the weight onto both axles, and also increase the load volume of the Rubicon sufficiently to reasonably store the maximum vehicle payload.
The cost for a roof rack system consists of a base rack at around A$1,000, guard rails at around A$500, and then vehicle specific mounting kits from around A$400. Accessories to mount shovels, high lift jacks, jerry cans, or gas bottles can be added for around A$200 per item.
Adding a roof rack will increase the loading on the front axle and especially the rear axle up to the maximum design rating of 3100lbs, or 1400kg, and will increase the tire load affecting both sand driving ability and the tire wear characteristics. Axles, wheels, and tires will be running at maximum load constantly. Adding a lift-kit to balance out the spring compression will not resolve this loading issue, and it is likely that the vehicle may end up being over-weight from a legal (insurance) perspective.
Using a roof rack will also significantly impact the dynamics of the vehicle. Adding up to 100kg onto the roof and 50kg to the outside of the vehicle will increase the overall pitch and roll as the track pushes the vehicle around. It will be very uncomfortable, and may actually become unsafe as the maximum approach and departure angles are reached.
On road, which will be the majority of the kilometres travelled, fuel economy can suffer by 10% and up to 25% according to some reports. Where tens of thousands of kilometres are at stake, and fuel is both in limited supply and expensive, it is best not to use a roof rack if there are better alternatives.
Roof Top Tent
The roof top tent suffers from the same dynamics and fuel economy issues as the roof rack, and it is also of very limited application being purely a place to sleep. If a roof top tent is fitted then the top of the vehicle can no longer be used for storing equipment.
A roof top tent costs around A$5,000, but considerably more can be spent if desired.
With the issues associated with roof top tents being the same as with roof racks and offering no other advantages, it is better to seek alternatives.
Many people have realised the benefits of an additional load carrying axle when travelling around Australia. The typical steel box trailer in the standard 6’x4′ or 7’x5′ single axle configurations lives in most suburban back yards, and has been making the journey to the summer camping holiday since forever. It has become more common recently to add off-road suspension and hitch components to make the box trailer capable of serious expeditions.
The typical suburban box trailer costs around A$1,500, but the vehicle must have a trailer hitch which can cost up to A$2,000 to install, depending on the vehicle. An off-road trailer with uprated suspension and chassis typically starts around A$5,000, but specialist camper trailers can be substantially more. Some fully fitted off-road box trailers cost upwards of A$60,000.
The design and registration of box trailers typically focusses on a gross vehicle mass (GVM) of 750kg and their Tare is typically 250kg, in the best case, leaving a payload capability of 500kg. If our total load can be distributed between vehicle and trailer then we can load the trailer with 400kg, leaving a margin of 20% remaining, and reduce the vehicle load to 200kg.
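The trailer loading arithmetic is simple enough to sketch in a few lines, using the GVM and Tare figures above:

```python
# Box trailer loading sketch, using the typical figures from the text.
trailer_gvm_kg = 750       # typical single axle box trailer GVM
trailer_tare_kg = 250      # best-case Tare
payload_capacity_kg = trailer_gvm_kg - trailer_tare_kg   # 500 kg available

trailer_load_kg = 400                                    # planned trailer load
margin = (payload_capacity_kg - trailer_load_kg) / payload_capacity_kg

total_payload_kg = 600
vehicle_load_kg = total_payload_kg - trailer_load_kg     # remainder in vehicle

print(payload_capacity_kg, margin, vehicle_load_kg)      # 500 0.2 200
```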
The load in a trailer is carried with a low centre of mass, so that the dynamics of the tow vehicle are not affected, and having the additional loaded trailer axle reduces the wear on the vehicle axles, wheels and tires.
However, towing a box trailer does not come for free. There is an increase in fuel consumption to be expected from towing. Depending on the size of the load carried and the amount of wind drag created by the trailer, the increase in fuel consumption may be up to 10%. This is significant, but it is much less than if a similar load were on a roof rack. And, as we now have a greater free load capacity it is possible to carry up to 100l of extra fuel as needed.
An important advantage to using a trailer is that it can be disconnected from the vehicle and left behind at a camp site, or trail head, when its contents are not needed. Through this method most of the payload associated with living does not need to accompany the vehicle on a difficult 4WD trail. This minimises the chances of breakage or damage to the payload.
The box trailer has several disadvantages. Firstly, the load is carried open and unsecured, and secondly, the payload is subject to dust and sand from both the vehicle rear wheels and the environment generally. Whilst Australia is generally safe, for peace of mind, it is best to keep valuables and equipment hidden out of sight when the trailer is left behind. So box trailer loads are usually covered by a tarpaulin or load cover. This adds to the soft security of the load, and helps to prevent dust and sand ingress, but it is time consuming to wrap and tie down the load each morning.
There are many advantages to using a simple box trailer to carry the payload, but it would be more ideal if the box trailer load could be covered by a solid lockable lid to secure the load and mitigate dust and dirt ingress.
Recent advances in plastics technology have enabled the creation of large roto-molded polyethylene structures, and companies have started to produce off-road “pod” trailers using polyethylene tubs and lids jointed like a clam shell and sealed with a gasket to produce an effective dust seal.
Typically these pod trailers incorporate all of the advantages of the box trailer, adding in the tare weight saving of a dust resistant plastic tub and sealed lid, and the aerodynamic efficiency of a smooth load top.
Many pod trailers can carry a payload of 750kg to 810kg, with their GVM being 1250kg with trailer brakes. An extreme off-road pod trailer can cost from A$13,000, and customisation and options can be added to increase the suitability for long distance expeditions.
With an appropriate off-road independent suspension, hitch, and trailer brakes, a pod trailer can follow behind a vehicle on all but the most difficult 4WD tracks. And where necessary the secure lockable pod can be left behind at a camp site or trail head.
Moving up from the box trailer or pod trailer solution, it is possible to consider a teardrop or square drop camper. The key advantage of the camper is that the question of sleeping arrangements is answered by a permanently made bed. At the end of a day, or when weather is bad there is a lot to be said for a ready-made bed.
A teardrop camper usually has a significant Tare approaching 900kg, and they can usually carry at least 400kg and up to 800kg in payload. They can easily accommodate the 400kg we need to carry. However, the camper GVM will certainly be approaching 1,900kg when fully loaded. This is about 1 Tonne more than a box or pod trailer.
Teardrop campers range in price from A$50,000 and up to around A$100,000, making them potentially more expensive than the tow vehicle.
Besides the large GVM of the teardrop camper, there is a cost to transport the volume for a bed and “sleeping space” around the country. The cost comes in the form of increased drag and fuel consumption from the larger box, and in reduced space to store camping equipment, unless the potentially dirty equipment is transported on top of the clean made bed.
Conclusion and Decision
Following on from the discussion above, I have decided to go with the pod trailer solution. Although using a trailer will close off some of the more extreme trails and options, such as parts of the CSR, the flexibility to leave the pod and equipment safely behind at the campsite, and have the small tow vehicle remain relatively unmodified (no heavy duty springs, or body lifts, etc), together with the other points discussed above, make the pod trailer the best value for money.
Really, I should be doing better than this. With a background in Agile Methodologies and Waterfall Project Management, using lists is positively Neanderthal in comparison. Yet, here I am. To get things done, and to remember what needs to be planned and done, I’m writing lists.
About a month ago I asked a good friend whether he’d be interested to join me in the bush for a while, since it had become obvious the only other interested co-traveller was our family dog. His response was, “Would I like to have a copy of a mutual friend’s 23 camping lists?” To which I just laughed. I mean 23 lists… come on. How many lists do you need to leave the house for a few weeks?
I didn’t think about lists for a while. But then after consuming another 20 hours of YouTube suggestions and recommendations for overlanding or international expeditions, I could no longer hold all of the thoughts and ideas in my head. And then I realised that this mutual friend has the right idea. Put it on a list and then it is managed. Putting it on a list doesn’t get it done, but it does get it reviewed every time the list is examined.
One month into this, I’ve established 11 lists for going bush. At this stage I’ve written no lists of destinations or activities, but rather focussed entirely on what I’ll need to make sure that being out bush won’t be life threatening, and will be mainly enjoyable.
So here’s my “TOP 11” List of Lists for going bush.
Vehicle – accessories and upgrades
Recovery – how to recover from vehicular stupidity
Tools – fixing things that fail
Spares – consumable items for vehicles
Camp Equipment – portable lifestyle
Cooking Utensils – to eat healthily
Camp Consumables – not quite food, but related
IT / Photography – toys related to bush activities
Planning during the past two years has been impossible. Worldwide, for everyone, so many things have changed.
I had plans, but I guess they’ve changed too. I no longer possess a licence for free movement, so leaving the country has become impossible. In the interim I’ve decided to go bush, at least until the restrictions on free movement are relaxed, and probably longer.
The Australian term “to go bush” has various definitions, including “to abandon amenity, and live rough” or “to live a simpler or more rural lifestyle”. Alternatively it can be seen as “going into hiding, to avoid authorities”.
So these will be my notes on going bush. Since I’m not photogenic, vlogs are out and the written word will have to suffice. I hope to cover my motivation for decisions on timing, routes, and gear. And when the adventure finally begins, and I am truly “gone bush”, updates on activities should come regularly wherever the Internet exists.
Just over 4 years ago, on 18th March 2018, I committed the first CP/M-IDE files into the RC2014 repository. Now that some time has passed and it has developed into a stable solution for CP/M I think it is time to fill in some details about why it was written, how it differs from other CP/M implementations, and how to reproduce images to match those in the CP/M-IDE repository.
There are several implementations of CP/M available for the RC2014. Initially, the CP/M on breadboard implemented by Grant Searle became the default implementation for the Z80 RC2014. Slightly later Wayne Warthen added support for the RC2014 to the Z80/Z180 RomWBW System. RomWBW is a very extensive and advanced set of system software, supporting many different RetroBrew machines, and in general it requires 512kB ROM and 512kB RAM to reach its full potential.
Each of these implementations has its own focus. The 9 Chip CP/M is based on simplicity, and being able to be built on a breadboard with the minimum of complexity, but it uses an occasionally unreliable 8-bit CF Card interface and has a substantially smaller TPA. Alternatively, RomWBW supports a variety of hardware including Z180 CPUs, and provides an underlying generalised architecture support which provides paged memory and many facilities but this imposes a processing overhead on I/O, and requires substantially more RAM than a typical CP/M system.
Faced with both these options, and being very interested to build my own solution, and to use my growing experiences supporting the z88dk community, I decided to build CP/M-IDE to fulfil a specific niche.
The CP/M-IDE is designed to provide support for CP/M on Z80 while using a normal FATFS formatted PATA or IDE drive. And further, to do so with the minimum of cards, complexity, and expense. Most recently, it has also become the CP/M which supports the 8085 CPU Module. Also recently, support for the standard RC2014 Pro with CF Module has been added.
Initially I chose the IDE Hard Drive Module specifically because I could use it to attach any old hard drives, aka “spinning rust”, to my RC2014, and this led to support for everything from these old 3 1/2″ hard drives, through to modern SSD or DOM solid state drives. It also supports both old and modern Compact Flash Cards in their native 16-bit mode, so readily available modern 1 and 2 GigaByte Compact Flash cards are OK. It is also possible to use SD Card to CF Card adapters with the IDE Hard Drive Module, allowing direct support of modern pluggable storage.
CP/M is a very compact Operating System and, in the most common version 2.2, it supports only serial interfaces and disk interfaces. For the RC2014 there are two standard serial Modules, being the ACIA Module and the more advanced and expensive SIO/2 Module.
As I’m quite interested in building real-time and event driven systems, in contrast to other CP/M implementations, CP/M-IDE therefore includes drivers supporting both transmit and receive interrupt based solutions, sourced from my z88dk RC2014 support package for the ACIA serial interface and the SIO/2 serial interface.
8085 CPU Module
More recently I have built an 8085 CPU Module for the RC2014 System. This is the first time that an 8085 CPU has been integrated into the RC2014 System, and it is able to work with the Z80 bus signalling required to drive the standard RC2014 Modules.
The concept remains to use the minimum of additional hardware over the entry level RC2014 Pro model. In fact just the IDE Hard Drive Module is desirable. But, the standard CF Module (and derivatives) can also be used, subject to the limited support of modern CF Cards.
IDE Hard Drive Interface
The IDE Hard Drive Module is based on the 8255 PPI device. This device was designed to work with the 8085 CPU and 8086 CPU. It is perfectly suited to supporting a 16-bit parallel IDE interface as it provides latching of signals on 3 separate 8-bit ports.
Initially I was concerned that the selection of control signal pins for the IDE interface limited the possibility for use of the 82C55 device for generalised I/O. I still think that this is an issue but, since no one has implemented further generalised solutions, the point is moot.
The IDE Hard Drive Module supports PATA hard drives of all types (including SSD IDE and DOM storage) and Compact Flash Cards and SD Card Adapters in native 16-bit PATA mode with buffered I/O being provided by the 82C55 device.
The IDE interface (also termed diskio) is optimised for performance and can achieve over 110kB/s throughput using the FatFS library in C. It does this by minimising error management and streamlining read and write routines. The assumption is that modern IDE drives have their own error management, and if there are errors from the IDE interface then there are bigger issues at stake.
The CF Module interface can achieve over 200kB/s throughput at FATFS level, and seems to provide best performance using SD Cards in SD to CF Card Adapters. The old default RC2014 CF Module v1.3 is often unstable with modern CF Cards or with SD to CF Card Adapters. However, the recent RC2014 CF Module v2.0 has become quite reliable with all modern large and small CF Cards. If you experience problems, then seek out this recent implementation.
For both IDE interfaces, performance within CP/M is approximately half the FATFS performance, because the CP/M deblocking algorithm implements a double buffer copy process where the 512 Byte physical sectors found on IDE disks are converted into the 128 Byte logical disk blocks that CP/M expects.
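The deblocking translation itself is simple arithmetic; a minimal sketch of mapping a 128 Byte CP/M logical record onto a 512 Byte physical sector (names and layout here are my illustration, not the CP/M-IDE source):

```python
# Four 128 Byte CP/M logical records fit in one 512 Byte physical sector.
RECORDS_PER_SECTOR = 512 // 128

def deblock(logical_record: int) -> tuple[int, int]:
    """Map a CP/M 128 Byte logical record number to
    (physical sector number, byte offset within that sector)."""
    sector = logical_record // RECORDS_PER_SECTOR
    offset = (logical_record % RECORDS_PER_SECTOR) * 128
    return sector, offset
```

The double buffer copy cost comes from reading the whole 512 Byte sector, then copying just the 128 Byte slice that CP/M asked for.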
In the ACIA builds, the receive interface has a 255 byte software buffer, together with an optimised buffer management supporting the 68C50 ACIA receive double buffer. The choice of memory size for the receive buffer is based on optimisations available by having the buffer a full “page”. Also text can be “pasted” in fairly large chunks into the CP/M command line without losing bytes.
Hardware (RTS) flow control of the ACIA is provided. The ACIA transmit interface is also buffered, with direct cut-through when the 31 byte software buffer is empty, to ensure that the CPU is not held in wait state during serial transmission. The size of the transmit interface buffer is based on free memory within the CP/M BIOS. As BIOS memory is typically reserved to start on the 256 Byte page boundary, if an update needed to consume more RAM, I would reduce the size of the transmit buffer to avoid the need to consume an additional page of BIOS memory.
In the SIO/2 build, both ports are enabled. Both ports have a 127 byte software receive buffer supporting the SIO/2 receive quad hardware buffer, and a 15 byte software transmit buffer. The transmit function has direct cut-through when the software buffer is empty. Hardware (RTS) flow control of the SIO/2 is provided. Full IM2 interrupt vector steering is implemented.
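One plausible reason for the odd-looking 255/127/31/15 Byte buffer sizes is that keeping each buffer one byte short of a power of two allows the index wrap to be a single AND operation. A sketch of that classic ring buffer idea (this is my illustration in Python, not the z88dk driver code):

```python
class RingBuffer:
    """Power-of-two ring buffer sketch; usable capacity is size - 1,
    as one slot is kept free to distinguish full from empty."""

    def __init__(self, size: int = 32):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.buf = bytearray(size)
        self.mask = size - 1          # wrap indices with AND, not modulo
        self.head = self.tail = 0

    def put(self, byte: int) -> bool:
        nxt = (self.head + 1) & self.mask
        if nxt == self.tail:          # buffer full
            return False
        self.buf[self.head] = byte
        self.head = nxt
        return True

    def get(self):
        if self.tail == self.head:    # buffer empty
            return None
        byte = self.buf[self.tail]
        self.tail = (self.tail + 1) & self.mask
        return byte
```

A 32 slot buffer like this holds 31 bytes, matching the quoted ACIA transmit buffer size.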
As both ACIA and SIO/2 devices have a hardware buffer for received bytes, it is important for the receiving interrupt handler to drain these buffers completely before returning execution to the program. If this is not done there is a danger that received bytes could be overrun and lost.
For the CP/M-IDE 8085 build the Serial Output (SOD) FTDI interface found on the 8085 CPU Module is enabled as the CP/M LPT: interface. This is activated by using ^p as per normal practice.
Whilst there is no support for additional hardware within CP/M itself (as there are no BDOS calls standardised), it is possible to use additional hardware in CP/M applications. Typical hardware options include the APU Module, various Sound Modules, and digital I/O Module.
There are many descriptions of Digital Research CP/M, so I won’t go into detail. It is important to know that CP/M v2.2 was in its day the most widely deployed Operating System for small computers based on the 8080, 8085, and Z80 CPUs. Later versions of CP/M supported the 8086, and 68000 CPUs, as well as providing many more system functions than the CP/M v2.2.
Whilst there have been later versions of CP/M produced, to my knowledge, there were no widely available user applications produced which could not be run on CP/M v2.2. This broad compatibility is why CP/M v2.2 is important.
CP/M v2.2 is essentially just 4 pieces of code. The BIOS (Basic Input Output System) is provided to abstract the hardware devices from the operating system. Essentially there is a limited set of BIOS commands that the BDOS can call on. These BIOS commands are implemented specifically for the characteristics of each machine, and in the early days of computing it was essential that a user knew how to write their own BIOS.
The second piece of code is the Page 0 of memory, which is written by the BIOS cold boot command on initialisation. The role of this Page 0 is to provide important addresses (for both BIOS and BDOS) and to set important status registers like the I/O Byte. The Page 0 is also used to manage the 8080, 8085, and Z80 CPU interrupt vectors, and to store the command line entered by the user when an application is initialised.
The CP/M BDOS is the middle layer of the Operating System. Application programs rely on BDOS system calls to support their requirements. Here the drives (A:, B:, through to maximally P:) are opened and closed, and disk sectors are read and written. The BDOS does its work by calling BIOS commands on behalf of the application that is currently loaded.
Often the BDOS is combined with the CCP (Console Command Processor) into one assembly language file because both of these components are constant and they are independent of the hardware. These two components are essentially the distribution of Digital Research CP/M which was sold to the user.
The CCP is the user interface for CP/M. It provides a very small number of integrated commands, like “directory list”, “erase”, “rename”, “type” or “exit”, but its main role is to load additional commands or applications called “Transient Programs” into RAM and execute them. Often, an application loaded into the Transient Program Area (TPA) RAM will overwrite the CCP in memory as it is normal for the CCP (and BDOS) to be reloaded once an application quits.
There are third-party alternatives available for both the CCP and BDOS, and as these are loaded each time the computer is restarted it is possible to replace the default versions by alternatives if desired. Specifically for CP/M-IDE, the DRI CCP can be replaced by Microshell SH (here), or both CCP and BDOS can be replaced by NZCOM without impacting the installed system.
CP/M was developed before there was a standard implemented for computer disk drives, and every system had its own peculiarities. In order to cope with this situation each BIOS had to be written to cover the possibilities, by completing a Disk Parameter Block. Each disk type needs its own DPB, which takes space in BIOS RAM, so it made sense for CP/M-IDE to be implemented with only one type of disk supported. Additionally each drive attached by the BIOS requires a substantial Allocation Vector RAM reservation. It needs to be said that providing for unused drives in CP/M substantially increases the BIOS size, and commensurately reduces the TPA RAM for user applications and in turn their working RAM. For comparison, CP/M-IDE has 3kB more TPA RAM available for user applications than the default RC2014 CP/M implementation.
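The Allocation Vector cost is easy to quantify: one bit per disk block, per attached drive. The 4 kByte allocation block size below is my assumption, but with it the numbers line up with the quoted 3kB TPA difference:

```python
# Allocation Vector (ALV) RAM cost per CP/M drive.
disk_bytes = 8 * 1024 * 1024          # one 8 MByte CP/M disk
block_bytes = 4096                    # assumed allocation block size
blocks = disk_bytes // block_bytes    # 2048 blocks (DSM = 2047)

alv_bytes_per_drive = blocks // 8     # one bit per block -> 256 bytes

# A 16-drive BIOS versus the 4-drive CP/M-IDE BIOS:
saved = (16 - 4) * alv_bytes_per_drive
print(alv_bytes_per_drive, saved)     # 256 3072
```

Twelve fewer drive reservations recovers roughly 3kB, consistent with the TPA comparison above.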
A subtle but important advantage to using only one disk type is that every disk is orthogonal, and it can be located anywhere on the underlying physical disk (ie. starting at any LBA). Also, it does not matter into which CP/M drive A:, B:, C:, or D: a disk is mounted when booting. The CP/M “system disk” looks exactly like any other disk, and every CP/M disk file can be located anywhere on the FATFS parent drive.
Further, the CP/M-IDE CCP/BDOS/BIOS operating system binaries are loaded from ROM. This is not typical, as most CP/M BIOS implementations will load the CCP/BDOS/BIOS from the first sectors (or tracks) of the first attached physical drive, and will require the system disk to be located in specific sectors of the physical drive, and they also rely on a specific allocation of LBA addressed sectors (or slices) for all additional drives.
The CP/M-IDE system supports a maximum of 4 active drives of nominally 8 MByte each. The maximum possible size of a CP/M disk is 8 MByte, due to overflow of a 16-bit calculation within the BDOS. Further each CP/M disk can support up to 2048 files as a maximum. By setting the standard CP/M-IDE disk type to be maximised both in terms of size and number of supported files there is no question of the disk storage being too small. The only limitation introduced is that up to a maximum of 4 CP/M drives can be active at any one time, which leaves us with the maximum free TPA RAM. The choice of 4 drives for CP/M-IDE was based on nominally having 1 drive for CP/M system utilities, 1 drive for application files, 1 drive for user data or source files, and 1 drive for temporary files. In practice I’ve found that working with 2 or 3 drives is the most common scenario, and often it makes sense to copy the few needed system utilities onto a working drive and work off that one drive.
CP/M-IDE is like having a 4 “floppy” drive machine (with 8MB floppy disks), and a library of up to thousands of floppy disks to choose from. Just insert the floppy disks you want to use when you want to use them. This interchangeable disk strategy is different to other RC2014 CP/M implementations that put everything into a maximum of 16 “hard” drives, at fixed LBA locations or slices, and leave them attached permanently.
As CP/M-IDE uses LBA addressing there can be as many CP/M disks stored on the IDE FAT32 (or FAT16) formatted disk as desired, and CP/M-IDE can be started with any 4 of them in any drive. Note that CP/M does not know about or care about the FAT file system. On launch CP/M-IDE is provided with an initialisation LBA for each of its 4 drives by the shell, and all future sector references to the disk (file) are calculated from these initial LBAs provided for each drive.
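Under this scheme, every disk access reduces to a base-plus-offset LBA calculation; a minimal sketch (the geometry constant and names are illustrative, not taken from the CP/M-IDE source):

```python
# Illustrative geometry: 256 x 512 Byte sectors per track.
SECTORS_PER_TRACK = 256

def sector_lba(drive_base_lba: int, track: int, sector: int) -> int:
    """Absolute LBA of a physical sector within a mounted CP/M disk file,
    given the drive's initialisation LBA provided by the shell."""
    return drive_base_lba + track * SECTORS_PER_TRACK + sector
```

Because only the base LBA differs between drives, a CP/M disk file can sit anywhere on the FAT32 parent disk.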
As the FAT32 format supports over 65,000 files in the root directory, and a similar number of files in each sub-directory, collections of hundreds or even thousands of CP/M disk files can be stored in any number of sub-directories on the FAT32 parent disk. Knock yourself out by storing every conceivable CP/M application on thousands of disks on a single 120 GByte drive. As the CP/M Operating System doesn’t store state (the CCP/BDOS is reloaded each time an application terminates), changing or reordering drives is as simple as typing exit, and then restarting with the new drives desired using the following shell command: cpm filefor.A filefor.B filefor.C filefor.D.
As we can store literally thousands of CP/M disks on one FAT32 parent disk, let’s think about how to create CP/M disks, and how to store information on them. There are two main methods for building CP/M disks, being from within CP/M using native tools such as the yash shell, and alternatively from a Linux or Windows PC host with the physical FAT32 disk temporarily attached to the host. For creating and building many CP/M disks the second host based method may be faster and more convenient.
Building CP/M disks from a PC host relies on the use of the CP/M Tools software utilities package. cpmtools utilities can be used to copy executable CP/M files from your host PC, where you have downloaded them, into the CP/M disk found on your FAT32 disk.
As CP/M-IDE uses a “non-retro-standard” disk definition, cpmtools lacks the required definition in the standard distribution. The disk definition for 8MByte CP/M-IDE disks is provided below. In Linux based systems this disk definition should be appended to the host’s /etc/cpmtools/diskdefs file.
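As an illustration only, a diskdefs entry consistent with the parameters described in this article (8388608 Bytes, 512 Byte sectors, 2048 directory entries) might look like the following. The definition name, block size, and track split here are my assumptions, so check them against the current CP/M-IDE repository before use:

```
diskdef rc2014-8MB
  seclen 512
  tracks 64
  sectrk 256
  blocksize 4096
  maxdir 2048
  skew 0
  boottrk -
  os 2.2
end
```

Note boottrk is empty because CP/M-IDE boots from ROM, so no tracks are reserved for the system.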
On Windows PCs, as of cpmtools 2.20, creation of a new disk does not fully extend the CP/M disk out to the full 8388608 Bytes of a fully sized CP/M disk. This means that as files are added to the CP/M disk, the host PC operating system may potentially fragment the disk as it grows. This would be bad, as offsets are calculated from the initial file LBA, and therefore the CP/M-IDE system has no way to recognise fragmented CP/M disks. Therefore, for safety, a template CP/M disk file has been provided which can be stored onto the parent disk and then copied and renamed as often as desired.
Typical usage to check the status of a CP/M disk a.cpm, list the contents, and then copy a file (e.g. bbcbasic.com) from the host to the CP/M disk, is shown below.
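A sketch of that workflow using the cpmtools utilities, assuming an 8 MByte disk definition has been installed in the host diskdefs file under the (assumed) name rc2014-8MB:

```
# check the file system of the CP/M disk image
fsck.cpm -f rc2014-8MB a.cpm

# list the directory contents
cpmls -f rc2014-8MB a.cpm

# copy bbcbasic.com from the host into user area 0 of the CP/M disk
cpmcp -f rc2014-8MB a.cpm bbcbasic.com 0:
```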
Building a CP/M System disk is a personal choice. There are multiple utilities and applications available, and not all of them will be relevant to your own needs. However, to get started, the contents of the RunCPM system disk can be used. An extended version can be found here.
Also, the NGS Microshell can be very useful, so it has been added to the example system disk too. There is no need to replace the default DRI CCP with Microshell. In fact, replacing it permanently would remove the special EXIT function, added to the CP/M-IDE version of the DRI CCP, used to return to the shell.
Of these applications above, the Hi-Tech C v3.09 suite continues to be updated and maintained by Tony Nicholson. Therefore it is useful to update the HITECHC.CPM.zip CP/M disk with the current release files.
When commencing a new project it can be convenient to start with a new clean working drive. Either the yash shell can be used from within CP/M to create a new drive file (yash will properly extend the created file to ensure that it is contiguous on creation), or the system drive can be temporarily attached to a PC and normal file management can be used to copy the template drive file provided, renaming the newly created drive file appropriately for the project.
Alternatively, when working with a CP/M compiler or editor, it can be quite useful to make a copy of the compiler drive file and work from that copy (rather than the original).
On first boot into CP/M, mount the sys.cpm system drive and the new working drive. It can then be useful to copy some CP/M commands onto the working drive using PIP.COM, then the sys.cpm system drive does not need to be mounted on further boots. Generally XMODEM.COM is all that is necessary, as the CP/M CCP has DIR, REN, ERA, TYPE, and EXIT commands built in.
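For example, with the new working drive mounted as A: and sys.cpm mounted as B:, a typical first session copies the transfer tool across (a sketch only):

```
A>B:PIP A:=B:XMODEM.COM
A>DIR
```

After this, XMODEM.COM lives on the working drive, so the system drive no longer needs to be mounted on subsequent boots.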
Then, on each subsequent boot-up of CP/M, only the working drive in drive A: is necessary. After compiling a new project with z88dk, the work-in-progress application .COM or .bin can be uploaded to the RC2014 using XMODEM.COM and then tested. If the work-in-progress crashes CP/M or needs further work, repeat the process as needed without danger of trashing files on any other drive.
Of course other development workflows are possible, as is simply mounting the ZORK games drive and playing an adventure game or two.
Building CP/M Software from Source
CP/M-IDE is quite unusual in that it is built with a Unix-like shell as the system loader. From the shell the CP/M system is started, but the shell can also be used to read the FAT file system and provide directory listings, to print memory and disk sector contents, and to report the status of the attached drive. Other versions of CP/M for the Z180 include file system write capability, but due to the limited capacity (32kB) of the RC2014 ROM these additional file management functions had to be omitted from the CP/M-IDE ROM, though they are available from the yash shell application.
The chicken or the egg? In this case the z88dk is both the starting point for building CP/M-IDE and the finishing point for developing CP/M-IDE applications.
By default the z88dk ACIA drivers are set up with a 15-byte transmit buffer. This needs to be changed to a 31-byte transmit buffer by changing this configuration value to 0x20.
Also, if you wish to enable shadow RAM where the Memory Module or SC108 Module is used, then this setting needs to be changed to 0x01. This will enable the RAM copy stub and the shadow RAM write and read functions. It is not relevant for the 8085 CPU build (which doesn’t support relocatable jump instructions), and is disabled by default for the Z80 builds (to support the 64k RAM Module).
And finally, the IDE driver is selected as either the CF (8-bit) or the PPIDE (16-bit) interface. To use the PPIDE interface the CF Module configuration needs to be set to 0x00.
With these settings adjusted to suit the targeted hardware, the RC2014 libraries need to be rebuilt. The sure way to do this is a full rebuild of z88dk, as both 8085 and Z80 libraries will be touched. It is done with the ./build.sh -c command from the root directory of z88dk. There are alternatives, such as deleting only the libraries that will change and executing the ./build.sh command.
As well as two compilers, a macro assembler, and a large variety of useful tools, the z88dk is in essence a library of Z80 assembly language code covering all of the standard C requirements, and providing multiple options for implementing these libraries.
However, the z88dk doesn’t have C code libraries included. These are excluded because they can take too long to compile, and z88dk already takes quite a while to build as it is. The use of external libraries, mainly C libraries, is supported through the z88dk-lib tool, which can import a compiled library and allow the linker to find it when a final binary application is being prepared.
For CP/M-IDE we need to have a high quality, reliable, fully functional FAT file system implementation. The most commonly used implementation is the ChaN FatFS. This code has been modified to work effectively with the Z80, and is provided in my z88dk-libraries.
For CP/M-IDE I have elected to use the SDCC compiler with the IY version of the libraries. For the CP/M-IDE 8085 the only option is to use the SCCZ80 compiler as it supports 8085 (and 8080) compilation.
As noted above, there is insufficient space in the 32kB ROM to support the full set of FAT file system functions, so we have to build a special version that is “read only”. There is a configuration option in the file here that should be set to 1 to enable the RC2014 read-only build. Then the library can be rebuilt with the following command lines.
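As a sketch only (the authoritative invocation is in the z88dk-libraries README, and the exact flags may differ by release), building and installing the FatFs library for the Z80 sdcc_iy variant looks along these lines:

```
zcc +rc2014 -clib=sdcc_iy -x -SO3 --max-allocs-per-node400000 @ff.lst -o ff
z88dk-lib +rc2014 ff
```

The -x flag tells zcc to build a library rather than a binary, and z88dk-lib installs the result where the linker can find it for the rc2014 target.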
The FAT file system libraries are now available for z88dk, so we can move on to compiling CP/M-IDE.
The source code available in the RC2014 Github repository for CP/M-IDE is kept up to date. There are four versions, each tuned to suit their minimum hardware characteristics. There is no “auto identification” of additional hardware. This implementation of the CP/M operating system supports only IDE attached FAT formatted disks and 1 or 2 serial ports, so that is all that is necessary. Any optional additional hardware available is supported by drivers built into the relevant application.
From the source directory of each version the command line identified here can be issued. The resulting .ihx file (renamed as .hex) can be compared with the provided HEX file. For interest it is worth compiling with the --list option, and studying the resultant assembly listings. This gives a good overview of the quality of code produced by the two compilers, and also the amount of space required to assemble the CP/M CCP/BDOS and BIOS components.
Now we have a functioning CP/M-IDE Intel HEX file, which can be written to EEPROM and tested.
New applications for Z80 targets can be built using either zcc +rc2014 -subtype=cpm or zcc +cpm, and for CP/M-IDE 8085 use zcc +cpm -clib=8085. There are example applications to test with in the z88dk examples directory, including players for 8-bit sound.
Of particular interest is the yash shell, which runs on CP/M and allows full access to the underlying FAT File System. It provides all of the standard file management tools which are missing (due to space constraints) from the CP/M-IDE ROM shell. This can be found in the z88dk-ext/os-related/CPM directory, together with the instructions to compile it. It is also provided in the CP/M-IDE “system disk”.
How does it work?
This is a description of CP/M-IDE 8085 specifically. The versions for the Z80 are quite similar, and so this can also be used as a reference for their operation. However as the RC2014 8085 support is unique in z88dk it is worth noting the specifics here.
The CP/M-IDE 8085 build is based on the rc2014 target and acia85 subtype within z88dk. The 8085 CPU starts execution at address 0x0000 from /RESET, therefore the target must write an effective Page 0 including a jump to the start of code, and interrupt and trap vectors, before the main() program for the CP/M-IDE shell can be started. z88dk uses the m4 macro preprocessor tool to expand included assembly code, and the configuration files for the acia85 subtype are found in config_8085.m4.
The overall initialisation process for the acia85 subtype is found in CRT 2 startup code for the RC2014. Each target in z88dk has multiple subtypes, and each of these subtypes has its own CRT startup code specification. These startup specifications are fully expanded and can be read most efficiently by using the --list option when compiling the system.
Before diving into the startup process it is worth considering how and where drivers for the rc2014 acia85 build are obtained. As the acia85 subtype is hybrid across newlib and classic libraries within z88dk it is worth noting that most of the drivers for acia85 are obtained from the device and driver directories within the rc2014 target. However, stdio drivers for acia85 and basic85 subtypes are found in the classic library in the rc2014/stdio directory.
Further, using the characteristics of linker preferences, if we choose to override the library drivers with our own versions found within the CP/M-IDE BIOS then the library versions will be ignored. And that is the case here, where we provide the ACIA, 82C55, and IDE drivers. This also means that before the main() function is started we need to copy these drivers to their correct location in RAM. This is done by placing code in the code_crt_init section, as this code will be loaded and run prior to main() according to the memory model allocation.
Now the interrupt vectors are complete, and the interrupt code is placed with buffers initialised and ready to go. The diskio and IDE drivers have been placed, so the main shell user interface can start. The command line is parsed using a shell system inspired by example code from Stephen Brennan. The commands implemented are mainly self-explanatory, mostly invoking one of the ChaN FAT file system functions. However the cpm command requires further description, as it is the transition point from z88dk into DRI CP/M.
The cpm function is called with up to 4 arbitrary file names, representing the 4 CP/M disks. These file names are tested and, if all the files provided are found to exist, the base LBA of each file will be written to a specific location in cpm_dsk0_base, and processing will be handed over to the cpm_boot() function.
The _cpm_boot function is the CP/M cold boot mechanism. The cold boot firstly toggles out the lower 32kB of ROM to reveal a “clean” 32kB of RAM. At this point the 8085 interrupt and trap vector addresses must be written into Page 0 RAM, together with other important CP/M locations such as the I/O byte. Then control is passed to the cboot function to continue with the cold boot.
In the cboot process we should remember that the contents of the CCP/BDOS and the BIOS RAM have already been written to upper 32kB of RAM by the preamble code, so this process does not need to be repeated. This is different in the warm boot wboot process where we have to assume that the CP/M application or transient program will have overwritten the CCP and possibly also the BDOS, so we have to repeat the initialisation found in the preamble called by pboot.
As part of the cboot and wboot process, we check which CP/M disk is going to be used for the A: drive by reading the LBA base, and then launch the CP/M CCP shell by returning to the preamble code and falling through to _main.
This covers creation of software support for the 8085 CPU within the framework of the z88dk and also with MS BASIC 4.7. Specifically, the 8085 undocumented instructions will be covered, and some usage possibilities provided.
8085 Microsoft BASIC 4.7
The Microsoft BASIC 4.7 source code is available from the NASCOM machine. Although the NASCOM was a Z80 machine, there were only minor changes from the original Microsoft BASIC 8080 code. Therefore it is an ideal source to use to build an 8085-based system.
Also, a rc2014 target ROM subtype acia85 has been provided to allow on-the-metal embedded applications to be written. The full 32kB of ROM and 32kB of RAM are then available, with the option to toggle out the ROM if needed for CP/M or similar systems.
The z88dk sccz80 C compiler is used for 8080, 8085 and Gameboy Z80 CPUs. This compiler is supported by the z88dk classic library. Over a few weeks, I reworked all of the sccz80 compiler support primitives (called l_ functions) to make them reentrant, and to optimise them for the respective CPU.
I’ve also reworked all of the z88dk string functions to support callee for the 8085 CPU. The callee calling mechanism is substantially faster than the standard calling convention. Also I’ve changed the loop mechanism for 8080 / 8085 / GBZ80 to use a faster mechanism. This consumes 5 bytes more for each function used, but reduces the loop overhead from 24 cycles per iteration to 14 cycles per iteration. Quite a substantial saving for extensively used functions like memcpy() and memset(), for example.
8085 Undocumented Instructions
Over the years since launch several very useful undocumented instructions designed into the 8085 have been found. These instructions are particularly useful for building stack relative code, such as required for high level languages or reentrant functions. However, perhaps because of corporate politics, these useful instructions were never announced, and thus were never widely implemented.
The z88dk-z80asm assembler (which has also recently become a macro assembler) provides synthetic instructions to simplify code for the different CPU variants. These instructions are usually a useful sequence of normal instructions that can be issued with no side effects (e.g. setting flags), and they streamline combined 8085 / Z80 programming.
Discussion on the Instructions
Some things to think about (and then do).
Use the Underflow Indicator (K or UI) flag with 16-bit decrement and the JP K, JP NK instructions to manage loops, like LDIR emulation, more cleanly. The 16-bit decrement underflow flag K is set on -1, not on 0, so pre-decrement the loop counter.
Use the LD DE,SP+n instruction with LD HL,(DE) to grab parameters from the stack, and with LD (DE),HL to store them back. This can be used with a math library to make it reentrant, for example, and it also relieves pressure on the small number of registers.
Use the LD DE,SP+n instruction with LD SP,HL to quickly set up the stack frame. For example LD HL,SP+n, DEC H, LD SP,HL to establish 256-n stack frame.
Use RL DE together with EX DE,HL to rotate 32 bit fields.
Use RL DE together with ADD HL,HL to shift 32 bit fields.
Use RL DE as ADD DE,DE to offset into tables and structures.
Use SUB HL,BC for 16 bit subtraction.
Remember EX (SP),HL provides another “16-bit register”, if SP+2 is the location of the return, and SP+4 is the location of first variable.
Learn how signed arithmetic can be improved using the K flag.
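The pre-decrement idiom in the first point above can be sketched as a small Python model (illustrative only; dcx here mimics the 8085 16-bit decrement, with the undocumented K flag set when the value wraps to 0xFFFF, i.e. on -1):

```python
def dcx(value):
    """Model of 8085 DCX with the undocumented K (underflow) flag.
    K is set when the decrement wraps past zero to 0xFFFF, i.e. on -1."""
    result = (value - 1) & 0xFFFF
    return result, result == 0xFFFF

def block_copy(src, count):
    """LDIR-style copy loop controlled by the K flag.
    Because K is set on -1 (not on 0), the counter is pre-decremented:
    load count - 1, then loop until K is set (JP NK back to the top)."""
    dst = []
    counter = (count - 1) & 0xFFFF  # pre-decremented loop counter
    i = 0
    while True:
        dst.append(src[i])          # the "copy one byte" body
        i += 1
        counter, k = dcx(counter)
        if k:                       # K set: the counter has passed zero
            break
    return dst
```

Loading the counter with count - 1 rather than count is the whole trick: the loop body then executes exactly count times before K is raised.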
Since we know that the 8085 undocumented opcodes are available in every 8085 device they can be relied upon for any 8085 system. The challenge will be to take existing 8080 programs, such as Microsoft Basic and CP/M, and implement improvements using these 8085 specific instructions.
In reworking the z88dk sccz80 l_ primitives to make them reentrant and to optimise them for the 8085 CPU, I have found the LD DE,SP+n instruction very important. Using this instruction it is possible to use the stack as effectively as static variable storage locations. The alternative available on the 8080 (and Z80), LD HL,n followed by ADD HL,SP, takes 21 cycles and clears the Carry flag. With the few registers available on the 8080, losing the Carry flag to provide state causes further cycle expense, which the 8085 alternative is spared.
Loading a single stack byte using LD DE,SP+n , LD A,(DE) is only 4 cycles slower than loading a static byte using LD A,(**). Likewise, loading a stack word using LD DE,SP+n , LD HL,(DE) is only 4 cycles slower than loading a static word using LD HL,(**). Given that variables can be used in situ on the stack, or pushed onto the stack from registers, rather than first having to be loaded into the static location, this small overhead translates into about 3 stack accesses for free compared with static variables.
One small design oversight in the Program Status Word of the 8085 is, however, quite annoying. The flags register contains a single bit that always reads as 0: a $FFFF pushed to AF is read back as $FF7F. This means that, unlike on the Z80, it is not possible to use a POP AF , PUSH AF pair as a temporary stack store, which removes AF as one of the only 3 additional 16-bit register options, making things even tighter when juggling the stack. I’d call it annoying AF.
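That round trip can be modelled in a couple of lines of Python (the 0x7F mask below simply encodes the behaviour quoted above, with the stuck-at-zero flag bit taken from the text):

```python
FLAG_MASK = 0x7F  # one flag bit always reads back as 0 (per the text)

def push_pop_af(af):
    """Model a PUSH AF / POP AF round trip on the 8085: the accumulator
    survives, but one flag bit is lost, so AF cannot be used as an
    arbitrary 16-bit scratch word via the stack."""
    a = (af >> 8) & 0xFF
    f = af & 0xFF
    return (a << 8) | (f & FLAG_MASK)
```

Any word whose low byte has that bit set comes back changed, which is why AF drops out of the list of usable stack registers.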
The RL DE and SUB HL,BC instructions are very useful for building 16-bit multiply and divide routines effectively, and they have contributed to useful optimisations of these primitives. The saving in bytes over equivalent 8080 implementations has allowed for partial loop unrolling, which also speeds up the routines by reducing loop overhead. Initially I was concerned that the SUB HL,BC instruction doesn’t include the Carry flag. But in hindsight it is not possible to carry into the registers effectively, and using the 8-bit SUB A,C , SBC A,B instructions via the A register is the way to manage long arithmetic.
Recently the LD DE,SP+n and LD HL,(DE) or LD A,(DE) instructions were used to replace the sccz80 z80 stack access routine LD HL,n, and ADD HL,SP followed by CALL l_gint or CALL l_gchar. Also the stack store routine CALL l_pint was replaced by LD (DE),HL. These small changes to the optimisation process have substantially improved the 8085 benchmarks, in both code size and performance, and they are now often better than similar z80 benchmarks.
The next challenge was to build a CP/M-IDE version for the 8085 CPU. The ingredients are ACIA serial drivers adapted for 8085, IDE and diskio drivers for 8085, and the ChaN FatFs library compiled for 8085, plus a 8085 adapted BIOS.
When looking at the IDE drivers written previously for Z80 it was obvious that I’d gone out of my way to use Z80 instructions, which were actually slower than using 8080 instructions. So, I took the opportunity to rewrite an integrated solution for both Z80 and 8080/8085, for future maintenance.
The new CP/M-IDE 8085 code is very similar to the existing ACIA and SIO serial Z80 code, by design. I’ve tried to minimise the differences wherever possible. The remaining differences are mainly in the BIOS code, and relate to initialisation of the 8085 interrupts and the different CRT code used between Z80 and 8085 systems.
The 8080 CPU stands at the root of microprocessor development over the past 50 years. Although it was the first commercially successful device, it was followed quickly by two different processors with different bus characteristics. This is a record of interfacing one of the descendants, the Intel 8085, with peripherals and modules designed for use with the other descendant, the Zilog Z80.
All three of these devices, the 8080, the 8085, and the Z80 were implemented with 40-pin DIP packaging, which limited the number of pins they could use for bus signalling. The 8080, requiring 3 power supply voltages, was particularly limited as it didn’t multiplex the address or data lines, but rather needed to share the data lines for status information. More about the 8080 can be read at Wikipedia, or CPU Shack. I will not add to it here.
Derived from the 8080 and implemented by the same lead designers and architects, the Zilog Z80 uses four lines to signal general timing on the bus. In addition, an M1 line is used to signal that an interrupt is being processed and that an interrupting peripheral needs to provide an address (or vector) to which the CPU should jump in IM2 mode.
The Z80 rationalised the power requirements down to +5V and GND, which allowed a simpler and more explicit set of bus controls to be provided. As the Z80 implemented two address spaces, one for memory and one for Input/Output ports, it was useful to have two separate lines signalling memory access and Input/Output access. In this way a peripheral only needed to handle one of the two signals, depending on whether it was memory or a I/O address space peripheral device.
In addition the Z80 has two lines providing signalling for Read or Write. The timing was designed so that the data on the 8 data lines was valid at the point when the respective signal was deasserted. The Z80 would hold data it wanted to write or output until the write signal was deasserted, and it would latch and read the bus when reading or inputting data when the read signal was deasserted.
With only minor differences, the Memory and Input/Output lines are operated with similar timing, and this is aligned mostly with the Read and Write signals. This enabled system designers to build very simple bus interfacing for their Z80 based systems.
There are many additional features and alternatives here, around Interrupt Mode 2, timing for sampling the /WAIT pin which causes the Z80 to pause, and other minor timing issues. However, they are not relevant for most purposes.
Most system designers used these four signals to create memory write, memory read, I/O write and I/O read signals. Then one signal line, together with a chip-select generated by the address lines (directly in simple systems, or through logic in more complex systems) was enough to operate each component of the system.
For the 8085, the Intel architects took the bus interface in another direction. They integrated several components from the support chips for the 8080 into the silicon die, and produced new features which made the 8085 much more useful as a micro-controller than the Z80. For the bus, the major change was to multiplex the data lines with the low address lines. This step allowed them to reuse the 8 saved lines on the 40-pin DIP for other purposes.
Multiplexing the address and data lines meant that they had to add an external address latch, to capture the lower address values, before either writing data or reading data from the bus. The normal read and write lines are present and they behave in a similar manner to the Z80.
In a significantly different solution from the Z80, the 8085 uses only one line to differentiate Input/Output and Memory addresses, using the sense of the line (high or low) to indicate whether the I/O address space or the memory address space is being addressed. The timing of this IO/M line is also substantially different from the Z80: here it is valid for the entire cycle of an instruction. It does not become valid when the bus address is valid; rather, it is valid from the start of the instruction through to its completion.
This is the first significant divergence from the Z80 system bus, and it causes issues with peripherals that require an enabling signal to be provided after the address lines are stable. In most designs a decoder was required to produce signals for attached peripherals.
Generating Z80 /IORQ and /MREQ from 8085 signals
As many Z80 standard peripherals, and also Motorola peripherals, need to have the /IORQ line valid when the address is stable, we need to generate a Z80-compatible /IORQ (and /MREQ) signal. There are textbook “decoder” circuits available to produce the four system signals /IOR, /IOW, /MEMR and /MEMW from the 8085 IO/M signal and /RD, /WR, but there is no standard solution for using the 8085 on the Z80 bus. This is the problem we are going to solve.
From the Z80 datasheet, the /IORQ and /MREQ signals are almost exactly tied to the timing of the /RD and /WR signals. Therefore we can use /RD and /WR with some combinational logic to produce mostly correct timing for /IORQ and /MREQ. We need a valid (active) signal when either /RD or /WR is low. If both are high, then the result should also be high (inactive). Both /RD and /WR are never active at the same time, but for convenience we can let the result be active if both are. In positive logic this would be generated by an OR gate, but with inverted (active low) logic it is implemented as an AND gate.
Intermediate Truth Table

/RD  /WR  –  Result ( /RD AND /WR )
 0    0   –  0 (invalid state)
 0    1   –  0
 1    0   –  0
 1    1   –  1
To generate the /MREQ signal we are looking for the time when IO/M is low whilst either /RD or /WR is low. In negative logic this is an OR gate, where the signal remains high unless both /MREQ and /RD or /WR are low. So to generate /MREQ we need to provide ( /RD AND /WR ) OR IO/M.
( /RD AND /WR )  IO/M  –  /MREQ
       0          0    –  0 (only when both are active)
       0          1    –  1
       1          0    –  1
       1          1    –  1
To generate the /IORQ signal we can recognise that it is simply the same /RD /WR logic, but with the IO/M line inverted (NOT). So we can generate /IORQ by ( /RD AND /WR ) OR NOT IO/M.
From this solution we can simplify the expression into either NAND or NOR gates. Taking NAND gates as the basis the solution can be simplified into 4 gates that can fit into a 7400 device.
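The derivation above can be checked with a small truth-table model in Python (levels are 0 or 1; /RD, /WR and the outputs are active low, while IO/M is high for I/O and low for memory):

```python
def z80_bus_signals(io_m, rd_n, wr_n):
    """Derive Z80-style /MREQ and /IORQ from the 8085 IO/M, /RD and /WR.
    strobe_n goes low whenever /RD or /WR is low: an AND gate on
    active-low inputs, as described in the text."""
    strobe_n = rd_n & wr_n
    mreq_n = strobe_n | io_m        # ( /RD AND /WR ) OR IO/M
    iorq_n = strobe_n | (1 - io_m)  # ( /RD AND /WR ) OR NOT IO/M
    return mreq_n, iorq_n
```

Walking the input combinations confirms that /MREQ is asserted only during memory read and write strobes, and /IORQ only during I/O strobes, matching the 4-gate NAND realisation.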
Other Bus Timing Issues
Several Z80 peripherals use the /WAIT signal to make the Z80 wait until they are ready to read data from the bus, or to write data onto the bus. The Z80 inserts one wait state whenever it uses I/O instructions, to give slow peripherals sufficient time to signal that they are not ready to proceed. The 8085 does not add this automatic wait state, so there may not be sufficient time for them to signal the CPU to wait. There are standard circuits available to add one wait state into 8085 bus cycles.
Motorola bus peripherals use an E or Enable clock to signal that they are being addressed. For the Z80 bus, this is typically implemented by inverting the /IORQ signal. However, for the 8085 using the method above, there may be insufficient time between the E (inverted /IORQ) and stabilisation of the address.
Z80 peripherals capable of Interrupt Mode 2 use the M1 signal to determine when they should place their interrupt address (vector) on the bus. The 8085 does not generate this signal, but since the 8085 does not support IM2 mode anyway this point is probably moot.
8085 CPU Module for RC2014
8085 CPU Module PCBs are available on Tindie. Combine with a Memory Module PCB to save postage.
The RC2014 Bus and Modules have been available now for some time, and the Z80 nature of the system bus provides for simplicity in the system design. There is no buffering or conversion by the CPU Module, and individual peripheral Modules are left to convert bus (or Z80) signals to suit their own requirements.
In researching the requirements for a 8085 CPU Module to work with the RC2014 Z80 bus and standard peripheral Modules, I found the Glitchworks 8085 SBC and also Alan Cox’s 8085 designs. My initial design replicated the bus interface signalling of these two designs.
After building the first version of the 8085 CPU Module I found that the Motorola 68B50 ACIA based RC2014 Serial Module didn’t work properly. This is because on the module the required E clock is derived from Z80 /IORQ timing, and the simple method of inverting IO/M as /IORQ doesn’t provide the timing needed. The 68B50 requires the bus address to be stable before E (or /IORQ inverted) is asserted.
A second version of the 8085 CPU Module was implemented, using the above method for generating the /IORQ and /MREQ signals.
The current hardware doesn’t supply a wait state to the CPU, so the hardware interface to the APU Module designed for the RC2014 doesn’t work. The 8085 CPU allows only 25ns to 30ns (depending on the manufacturer’s specification) for a peripheral to assert not READY (or /WAIT). The Am9511A takes 83ns to assert /WAIT.
The retro-challenge is to extend the current 8085 CPU Module design to include a wait state generator for IO instructions to support the APU Module and the UX Module.
Retrochallenge – 1st Update – 2nd October
Getting to Am9511A APU support for the RC2014-8085 machine means firstly getting the fundamental 8085 platform working.
The RC2014 is supported by the “newlib” of z88dk, which is meant for the Z80, Z180, and Z80N (Spectrum Next) processors, while the 8085 is supported by the “classic” library. So this is the first time that a newlib machine is using classic library code. Confusing? Yes, I find it so.
Anyway, the trick is just getting the right pieces to link together. Having a ZIF-socketed ROM and a TL866CS programmer helps with fast programming cycles.
Retrochallenge – 2nd Update – 3rd October
Now that the z88dk RC2014-8085 ROM build using the ACIA Serial Module is working (along with the RAM build supported by BASIC), I’ve spent the past days tidying the ACIA builds across my various repositories, to keep everything consistent. So now my BASIC builds for both 8085 and Z80 are aligned with RC2014 HexLoadr BASIC, CP/M-IDE ACIA, and also the z88dk ACIA newlib device code. I also took the time to clean up some of the SIO device code.
@suborb is working on the z88dk classic library crt0 and compiler intrinsics, as they’ve been stuck in both classic and newlib and are a bit disorganised. Hopefully the result will be one set that can be used for both compilers (zsdcc, and sccz80) and both libraries, across multiple machines (8080, 8085, GBZ80, Z80, Z180, Z80N, etc) which will make maintenance much easier.
Waiting now for China to come back from National Day holiday, so I can get started with new hardware.
8085 Wait State Generator
Retrochallenge – 3rd Update – 8th October
As noted above, the window of opportunity for a 8085 bus peripheral to signal not READY is very short. In fact it is no more than 30ns from the fall of the ALE signal, and this is 30ns before the /IORQ signal is even enabled.
Timing information from the 8085 datasheet shows tLRY as maximum 30ns, and tLC as minimum of 60ns.
To be able to connect devices designed for the Z80 bus to the 8085 CPU we will need to implement a wait state generator. In the best case this will only affect I/O cycles, and will not slow down normal memory read and write cycles.
Designing the 8085 /IORQ Wait State Generator
As the need to generate a wait state was well known at the time of release of the 8085, several sources include the information required for the design of a basic solution. It is left to the reader to determine how to use the created wait state though.
For our purposes we need a wait state generated only for peripheral devices, accessed using the I/O instructions. Therefore we can modify the above circuit to only generate a wait state when the I/O address space is active, or when the external Z80 bus /WAIT signal is active. The below circuit produces a /READY signal that provides one wait state whenever the I/O address space is active, and can continue to produce wait states until the /WAIT signal is de-asserted.
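As a behavioural sketch only (a simplified model, not the circuit itself), the generator can be treated as a small clocked state machine: /READY goes low for one forced wait state at the start of each I/O cycle, and stays low while the external /WAIT line is held low.

```python
def ready_sequence(io_cycle, wait_n):
    """Model the /READY output across consecutive clock states.
    io_cycle: True while the I/O address space is active on that clock.
    wait_n:   level of the external Z80-bus /WAIT line (active low).
    Returns the /READY level (active low) presented to the 8085 per clock."""
    ready = []
    forced = False  # has the single forced wait state been issued yet?
    for io, wn in zip(io_cycle, wait_n):
        if io and not forced:
            ready.append(0)      # one forced wait state at I/O cycle start
            forced = True
        elif io and wn == 0:
            ready.append(0)      # peripheral extends the wait via /WAIT
        else:
            ready.append(1)      # proceed
            if not io:
                forced = False   # re-arm for the next I/O cycle
    return ready
```

The model shows the key behaviour on the scope traces: a single inserted wait state on every I/O access, stretched further only when the peripheral holds /WAIT low.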
As the static RAM / EEPROM memory devices we are using are not sensitive to the timing of the /MREQ signal, the NAND gates assigned to generate a correct Z80 /MREQ have been recovered and reused in the implementation of the wait state generator. Therefore the revisions required only one additional device on the PCB. Based on this design a revised 8085 CPU Module was created, and ordered. Due to arrive around October 18th, which won’t leave much time to finish before the end of the RetroChallenge. It will be a rush, as usual.
The new 8085 CPU Module PCB arrived, so wasting no time I’ve built one up to test. And it works!
It is interesting to look at the signals actually appearing on the RC2014 Bus during the operation of the APU. Here we have a floating point read from the APU, 4 bytes, where the wait state generator produces sufficient delay (1 wait state) to allow the APU to generate its own /WAIT signal for the last two bytes.
The floating point write cycle is similar but the duration of the /WAIT signal from the APU is longer, and the APU needs to assert it on every byte written. Note that tRYH is 0ns, so there is no need to hold the /READY signal beyond the clock rise point.
To support the Am9511A APU Module the /WAIT signal has to be patched to the USER1 Pin (if using the standard RC2014 backplane), which allows the Am9511A to extend the single wait state generated by the 8085 CPU Module for as long as the APU needs.
I’ve prepared a specific version of MS Basic 4.7 for the 8085 CPU Module when used with the Am9511A APU Module. Initial testing is working. It is looking very good to achieve the RetroChallenge goals. Please read further at the 8085 Software post for more information.
With the Wait State generator functioning, it is now possible to use the UX Module for a VGA screen and PS/2 Keyboard.
Retrochallenge – 6th (Final) Update – 26th October
Rework of z88dk classic 8080/8085/gbz80 library l_ functions.
When working with the 8085, the biggest issue is the continual pressure on the few CPU registers. Alongside the 8-bit accumulator a and the 16-bit accumulator hl, we have only two additional register pairs that can be used, bc and de. This gives the system programmer few options but to use static memory locations to store intermediate values, which leads to non-reentrant code.
Having non-reentrant code is normally not a problem, but it does lead to issues when multiple threads (or tasks) are trying to use the CPU at the same time, for example when a multi-tasking operating system is to be supported. So it is useful to try to build reentrant functions that use the stack for storage of intermediate values, rather than static memory locations.
The designers of the 8085 had this in mind when they designed the additional instructions found in the 8085 silicon. The “new” instructions make it very efficient to build stack-relative functions (compared to the 8080), and this relieves some pressure on the small number of registers.
However, there was one oversight made by the designers: the 8085 af register pair cannot be used, in contrast to the z80, to pop and push arbitrary words on the stack, because there is one flag bit that always reads as 0. This reduces the number of usable 16-bit register pairs by one out of a possible four, which is a subtle but annoying limitation of the 8085.
As background, some of these functions originate from the Amsterdam Compiler Kit of the 1980s and 1990s, and haven’t been updated or improved for the past 20 years. They weren’t broken. But they were in need of some attention.
So this update is the final one in the October 2021 RetroChallenge. All the new functions are checked in and are now part of the z88dk.
My club uses pneumatic systems to turn the ISSF Targets, which are controlled by a timing system. One of the members asked me to help build a phone interface for the systems.
The systems are used for many courses of fire, and there are quite a few options to manage. On the front panel there is a RESET, which is tied to the CPU RESET, and a FACE button which returns the targets to face the shooter for scoring.
It turns out that the retired systems are based on an 8085 CPU, in the classic minimum configuration with an 8155 providing 256 bytes of RAM, and the input and output ports. There is a 2732 UV EPROM holding the program.
So, how do we get these devices online? My thoughts are to add a serial port so that the system can be controlled remotely, then to use an additional WiFi enabled device which can present a web interface to the Range Officer to control proceedings.
The first step is to see what is going on under the hood. Using the TL-866 programmer the binary code was read from the ROM, and then using z88dk-dis the existing code could be disassembled and interpreted.
It was interesting to see a very simple method of operation in the existing ROM. The system can only change course of fire if it is RESET, when it reads the position of the switches, and then halts awaiting an interrupt to trigger the course of fire. When the string is finished it will return to repeat the same course of fire.
Timing was based on a delay circuit providing 500ms of delay per unit. Perhaps it is not 100% accurate, but it is good enough for the application.
I believe that I found a bug that has been latent in the device for the last 40 years. It seems that an address byte was reversed, which would cause a jump into empty addresses. Not sure why no one realised that previously.
Building a Serial Interface
I’m planning to build a simple serial interface which will read a character, and then change the course of fire based on that character. Initialising the course of fire can then be done by the web interface, by triggering an interrupt, or by using the wired front panel interface.
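As a sketch of the idea in C (the characters and course numbers here are invented for illustration, and are not the actual protocol in the ROM):

```c
#include <stdint.h>

/* Hypothetical mapping from a received serial character to a
 * course of fire; 0 means keep the front panel selection. */
uint8_t course_from_char(char c)
{
    switch (c) {
    case 'P': return 1;     /* precision */
    case 'R': return 2;     /* rapid fire */
    case 'D': return 3;     /* duel */
    default:  return 0;     /* unknown: fall back to front panel */
    }
}
```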
After asking the experts I learned that the SID/SOD pins on the 8085 can be used as a bit-bang serial port. In fact that is the standard way of building a serial port for early systems. The code for building serial transmission is included in the early application notes.
The serial code works perfectly at 9600 baud on this 3MHz system. Since only one character will be received and a few transmitted on boot, there are no performance issues to consider.
I’ve written the upgrade code to replicate the front panel selection process, and to allow the system to behave exactly as before when no serial input is available. When a serial command is available, which is triggered by activity on the RST6.5 line, then the system will set a different course of fire than is shown on the front panel. The string can be triggered either by the front panel, or by the interrupt related to the serial interface.
ESP-32 Web Interface
Following a bit of a search the Adafruit HUZZAH32 Breakout presented itself as the best solution to web enable the Target Turner. It can be powered by 5V, and the RX is protected against 5V input by a diode.
The physical interface is going to be an FTDI Basic style connector. Using this connector will allow me to test the 8085 first, and then build and test the web interface separately from the Target Turner. The last step will be to integrate the two devices into a system.
Using the simple serial character interface, it should be possible to present an active web page to the Range Officer.
There are many tutorials on how to build active web pages using the ESP-32 and WebSockets.
Over the past few years I’ve implemented a number of interfaces for Z80 peripherals based on the principle of the interrupt driven ring buffer. Each implementation of a ring exhibits its own peculiarities, based on the specific hardware. But essentially I have but one ring to bring them all and in the darkness bind them.
This is some background on how these interfaces work, why they’re probably fairly optimal at what they do, and things to consider if extending these to other platforms and devices.
The ring buffer is a mechanism which allows a producer and a consumer of information to each work at a pace that suits their own needs, without having to coordinate their timing with each other.
Wikipedia defines a circular buffer, or ring buffer, as a data structure that uses a single fixed-size buffer as if it were connected end-to-end. The most useful property of the ring buffer is that its elements do not need to be relocated as they are added or consumed. It is best suited to FIFO (first-in, first-out) buffering.
More recently, I’ve been working with Z80 platforms and I’ve taken that experience into building interrupt driven ring buffer mechanisms for peripherals on the Z80 bus. These include three rings for three different USART implementations, and a fourth ring for an Am9511A APU.
But firstly, how does the ring buffer work? For the details, the Wikipedia entry on circular buffers is the best bet. But in short, the information (usually a byte, but not necessarily) is pushed into the buffer by the producer, and removed by the consumer.
The producer maintains a pointer to where it is inserting the data. The consumer maintains a pointer to where it is removing the data. Both producer and consumer have access to a count of how many items there are in the buffer and, critically, the act of counting entries present in the buffer and adding or removing data must be synchronised or atomic.
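In C, the core of a generic ring buffer (before any of the optimisations below) looks something like this sketch. In the real interrupt drivers the count test and update run with interrupts disabled, so the pair is effectively atomic:

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_SIZE 16                /* any size, for illustration */

typedef struct {
    uint8_t data[BUF_SIZE];
    size_t  head;                  /* producer insertion index */
    size_t  tail;                  /* consumer removal index   */
    size_t  count;                 /* items currently stored   */
} ring_t;

/* Producer side: returns 0 on success, -1 if the buffer is full. */
int ring_put(ring_t *r, uint8_t byte)
{
    if (r->count == BUF_SIZE) return -1;   /* full: reject */
    r->data[r->head] = byte;
    r->head = (r->head + 1) % BUF_SIZE;    /* wrap around */
    r->count++;
    return 0;
}

/* Consumer side: returns 0 on success, -1 if the buffer is empty. */
int ring_get(ring_t *r, uint8_t *byte)
{
    if (r->count == 0) return -1;          /* empty: nothing to do */
    *byte = r->data[r->tail];
    r->tail = (r->tail + 1) % BUF_SIZE;    /* wrap around */
    r->count--;
    return 0;
}
```

Note the modulo wrap on every access; removing that cost is exactly what the optimisations below are about.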
8 Bit Optimisation
The AVR example code is written in C and is not optimised for the Z80 platform. By using some platform specific design decisions it is possible to substantially optimise the operation of a general ring buffer, which is important as the Z80 is fairly slow.
The first optimisation is to make the buffer exactly one page, or 256 bytes. The advantage here is that Z80 addressing is 16 bits wide, so if we only need the lowest 8 bits to address 256 bytes, we can simply align the buffer onto a single 256-byte page and then increment through the lowest byte of the buffer address to manage the pointer access.
If 256 bytes is too many to allocate to the buffer, then we can use a power-of-2 buffer size and align the buffer in memory so that it falls on a boundary of the buffer size; the pointer calculation then becomes simple masking (rather than a decision and jump). Simple masking ensures that no jumps are taken, which means that the code flow and delay are constant no matter which place in the buffer is being written or read.
Note that although the number of bytes allocated to the buffer is 256, the buffer cannot be filled completely. A completely full 256 byte buffer cannot be discriminated from a zero fullness buffer. This does not apply where the buffer is smaller than the full page.
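In C, the two pointer-advance strategies can be sketched like this (on the Z80 the 8-bit roll-over comes for free by incrementing only the low byte of the buffer address):

```c
#include <stdint.h>

/* Full-page (256 byte) buffer: the index is a uint8_t, so the
 * wrap costs nothing; incrementing past 255 rolls over to 0. */
uint8_t next_index_page(uint8_t idx)
{
    return (uint8_t)(idx + 1);     /* 255 + 1 -> 0 automatically */
}

/* Smaller power-of-2 buffer (64 bytes here): masking with
 * SIZE - 1 replaces the compare-and-jump, so the pointer update
 * takes the same time wherever we are in the buffer. */
#define TX_SIZE 64u

uint8_t next_index_masked(uint8_t idx)
{
    return (uint8_t)((idx + 1) & (TX_SIZE - 1));
}
```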
With these two optimisations in place, we can now look at three implementations of USART interfaces for the Z80 platform: the MC6850 ACIA, the Zilog SIO/2, and the Z180 ASCI. There is also the Am9511A interface, which is a little special as it has multiple independent ring buffers, and multi-byte insertion.
To start the discussion, let us look at the ACIA implementation for the RC2014 CP/M-IDE bios. I have chosen this file because all of the functions are contained in one file, which provides an easier overview. The functions are identical to those found in the z88dk RC2014 ACIA device directory.
Using the ALIGN keyword of the z88dk, the ring buffer itself is placed on a page boundary, in the case of the 256-byte receive buffer, and on the buffer size boundary, in the case of the 2^n-byte transmit buffer.
Note that where the buffer is smaller than a full page, all of the bytes in the buffer could in principle be used, because the buffer counter won’t overflow, but I haven’t made that additional optimisation in my code. So no matter how many bytes are allocated to a buffer, one byte always remains unused.
Once the buffer is located, the process of producing and consuming data is left to either put or get functions which write to, or read from the buffer as and when they choose to. There is no compulsion for the main program flow to write or read at a particular time, and therefore the flow of code is never delayed. This is optimum from the point of view of minimising delay and maximising compute time. Additional functions such as flush, peek, and poll are also provided to simplify program flow, and init to set up the peripheral and initialise the buffers on first use.
With the buffer available then the interrupt function can do its work. Once an interrupt from the peripheral is signalled, the interrupt code checks to see whether a byte has been received. If not then the interrupt (in the case of the ACIA and ASCI) must have been triggered by the transmit hardware becoming available.
If in fact a byte has been received by the peripheral then the interrupt code recovers the byte, and checks there is room in the buffer to store it. If not, then the byte is simply abandoned. If there is space, then the byte is stored, and the buffer count is incremented. It is critical that these two items happen atomically, which in the case of an interrupt is the natural situation.
If the transmission hardware has signalled that it is free, then the buffer is checked for an available byte to transmit. If none is found then the transmit interrupt is disabled. Otherwise the byte is retrieved from the buffer and written to the transmit hardware while the buffer count is decremented.
If the transmit buffer count reaches zero when the current byte is transmitted, then the interrupt must disable further transmit interrupts to prevent the interrupt being called unnecessarily (i.e. with the buffer fullness being empty).
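The receive and transmit logic of the interrupt routine can be sketched in C as follows. The hardware access is stubbed out with plain variables here, and the register and helper names are mine; the real drivers are written in Z80 assembly.

```c
#include <stdint.h>

#define RX_FULL 0x01

/* Stubbed hardware registers, standing in for the real USART. */
static uint8_t fake_status, fake_rx;
static uint8_t last_tx;
static int tx_int_enabled = 1;

static uint8_t usart_read_status(void)  { return fake_status; }
static uint8_t usart_read_data(void)    { return fake_rx; }
static void usart_write_data(uint8_t b) { last_tx = b; }
static void usart_disable_tx_int(void)  { tx_int_enabled = 0; }

/* 256-byte buffers: the uint8_t indices wrap around for free. */
static uint8_t rx_buf[256], tx_buf[256];
static uint8_t rx_head, tx_tail;
static uint16_t rx_count, tx_count;

void usart_isr(void)
{
    if (usart_read_status() & RX_FULL) {    /* a byte was received */
        uint8_t byte = usart_read_data();
        if (rx_count != 256) {              /* room in the buffer? */
            rx_buf[rx_head++] = byte;       /* store; index wraps  */
            rx_count++;
        }                                   /* else: drop the byte */
    } else {                                /* must be TX empty    */
        if (tx_count == 0) {
            usart_disable_tx_int();         /* nothing left to send */
        } else {
            usart_write_data(tx_buf[tx_tail++]);
            if (--tx_count == 0)
                usart_disable_tx_int();     /* drained: stop TX ints */
        }
    }
}
```

Because this runs inside the interrupt, the count updates are naturally atomic with respect to the main program.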
Both the SIO and ASCI have multi-byte hardware FIFO buffers available. This is to prevent over-run of the hardware should the CPU be unable to service the receive interrupt in sufficient time. This could happen if the CPU is left with its general interrupt disabled for some time.
One additional feature worth discussing is the presence of a transmit cut-through, which minimises delay when writing the “first byte”. Because the Z80 processor is relatively slow compared to a serial interface, it is common for the transmit interface to be idle when the first byte of a sequence of bytes is written. In this situation writing the byte into the transmit buffer, and then signalling a pseudo interrupt (by calling the interrupt routine) would be very costly. In the case of the first byte it is much more effective simply to cut-through and write directly to the hardware.
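A sketch of a put function with the cut-through, again with stubbed hardware and invented names (a full-buffer busy-wait is omitted for brevity):

```c
#include <stdint.h>

#define TX_EMPTY 0x02
#define TX_SIZE  64u

/* Stubs standing in for the hardware and interrupt control. */
static uint8_t hw_status = TX_EMPTY;    /* transmitter starts idle */
static uint8_t hw_last;                 /* last byte given to hardware */
static uint8_t tx_buf[TX_SIZE];
static uint8_t tx_head;
static uint16_t tx_count;
static void di(void) {}                 /* disable interrupts (stub) */
static void ei(void) {}                 /* enable interrupts (stub)  */
static void usart_enable_tx_int(void) {}

void usart_putc(uint8_t byte)
{
    /* Cut-through: the buffer is empty and the transmitter idle,
     * so skip the ring entirely and write straight to the hardware. */
    if (tx_count == 0 && (hw_status & TX_EMPTY)) {
        hw_last = byte;
        hw_status &= (uint8_t)~TX_EMPTY;   /* hardware is now busy */
        return;
    }

    /* Otherwise queue the byte; the ISR drains the buffer later. */
    di();                                  /* atomic insert */
    tx_buf[tx_head] = byte;
    tx_head = (uint8_t)((tx_head + 1) & (TX_SIZE - 1));
    tx_count++;
    usart_enable_tx_int();
    ei();
}
```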
For the ring buffer to function effectively, the atomicity of specific operations must be guaranteed. During an interrupt on the Z80, further interrupts are typically not permitted, so within the interrupt we have a degree of atomicity. The only exception to this rule is the Z80 Non Maskable Interrupt (NMI), but since this interrupt is not compatible with CP/M it has never been widely used and is therefore not a real issue here.
Across the three implementations there are three different Z80 interrupt modes in play. The Motorola ACIA is not a Zilog Z80 peripheral, so it can only signal a normal interrupt, and can therefore (without some dirty tricks) only work in Interrupt Mode 1. For the RC2014 implementation it is attached to INT or RST38 and therefore when an interrupt is triggered it is up to the interrupt routine to determine why an interrupt has been raised. This leads to a fairly long and slow interrupt code.
The Z180 ASCI has two ports and is attached to the Z180 internal interrupt structure, which works, in effect, similarly to Z80 Interrupt Mode 2, although it is actually independent of the Z80 interrupt mode. Each Z180 internal interrupt is separately triggered; however, it still cannot discern between a receive and a transmit event. So the interrupt handling is essentially similar to that of the ACIA.
The Zilog SIO/2 is capable of being attached to the Z80 in Interrupt Mode 2. This means that the SIO is capable of being configured to load the Z80 address lines during an interrupt with a specific vector for each interrupt cause. The interrupts for transmit empty, received byte, transmit error, and receive error are all signalled separately via an IM2 Interrupt Vector Table. This leads to concise and fast interrupts, specific to the cause at hand. The SIO/2 is the most efficient of all the interfaces described here.
For interest, the Am9511A interface uses two buffers, one for the one byte commands, and one for the two byte operand pointers. The command buffer is loaded with actions that the APU needs to perform, including some special (non hardware) commands to support loading and unloading operands from the APU FILO.
A second Am9511A interface also uses two buffers, one for one byte commands, and one for either two or four byte operands. This mechanism is not as nice as storing pointers as in the above driver, but it is required for situations where the Z180 is operating with paged memory.
I’ve since revised the above solution again to use three-byte (far) operand pointers, as that makes for a much simpler user experience. The operands don’t have to be unloaded by the user. They simply appear auto-magically…
I pulled a Sun Microsystems Ultra5 machine out of the e-waste some time ago, and have been running various versions of debian sparc or Ubuntu on it for the last few years. The final version was debian Wheezy, the current oldoldstable.
However, since there is no further work on debian oldoldstable, and as there was no working https support in any browser, it was time to upgrade to a more modern release. But for sparc processors I couldn’t find anything suitable; Solaris 10 was last updated in 2013. A path was illuminated when I read this email from John Paul from June 2016, asking the 82 remaining users of the debian sparc distribution to migrate to the Sparc64 port. I guess I was one of those last 82 hold-outs. And that was 2 years ago. So, Sparc64 became the target port.
The Ultra5 I pulled from the e-waste has been improved over the years, and it is now no longer a machine that could have been purchased from Sun. I’ve added 1GB of 50ns RAM, cutting (hacking, in the literal sense) the 2nd hard drive carrier to make space, and have upgraded the CPU to 440MHz, which was only supported in the Ultra10.
I disconnected the jet engine cooling fan, and replaced it with a quiet slow fan sitting on top of the CPU heat sink, and replaced the hard drive by a 64GB PATA SSD.
I’ve also added a PGX64 video card and a USB card to enable modern devices to be supported.
My final hack was to convert the NVRAM to use a VERY LARGE battery. The NVRAM stores the MAC address and other important system configuration. The standard chip has about a 2 year lifetime if the machine is mainly turned off. The new lithium CR123A battery should last about 150 years.
Following a number of false starts, the upgrade to Sparc64 went very easily. The April 4th netinstall ISO is good, and can be used as a reference. Of course newer ISOs will be provided regularly, but at least I’m sure that a working machine can be built from the Internet Archive April 4th snapshot. From the OpenBoot command line:
> boot cdrom expert libata.dma=0
The instructions for the upgrade are very standard debian, using the netinstall ISO. The only special instruction is to enter the archive mirror details.
The installer automatically realises that the hard disk controller is incapable of DMA and configures it to PIO4 mode. Later, the radeon modesetting can be prevented by adding an options line to the local.conf file.
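The options line in question would look something like this (the file path and module parameter are my assumption of what is meant here):

```
# /etc/modprobe.d/local.conf
options radeon modeset=0
```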
At this point you should have a working Sparc64 installation.
Using the PGX64
The PGX64 has some additional memory, which allows higher screen resolutions than the inbuilt PGX24 graphics. But, unfortunately, it is no faster than standard.
In order to get it to work without conflict, it is necessary to disable the inbuilt PGX24 device, located on PCI Bus B location 2, by configuring the pcib-probe-list.
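At the OpenBoot prompt, that would be something like the following, assuming the default probe list is 1,2,3 and the PGX24 is device 2 as noted above:

```
> setenv pcib-probe-list 1,3
> reset-all
```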
Building a Desktop
Getting the Ultra5 to be a desktop machine again requires a working X11 graphics adapter. The PGX64 (and the inbuilt PGX24) graphics use the ATI Rage chip, supported by the mach64 driver.
> sudo apt-get install xserver-xorg-video-mach64
Unfortunately, sometime around 2013, mach64 driver support disappeared, around the time that the security aspects of the Linux kernel were tightened. The mach64 driver, which is still packaged for Sparc64, now produces an error when loading.
From /var/log/Xorg.0.log, the driver is unable to map its mmio aperture.
[ 84.251] (II) MACH64(0): Creating default Display subsection in Screen section
"Default Screen Section" for depth/fbbpp 24/32
[ 84.251] (==) MACH64(0): Depth 24, (--) framebuffer bpp 32
[ 84.252] (==) MACH64(0): Using XAA acceleration architecture
[ 84.252] (EE) Unable to map mmio aperture. Invalid argument (22)
[ 84.252] (WW) MACH64: Mach64 in slot 2:1:0 could not be detected!
[ 84.252] (II) UnloadModule: "mach64"
[ 84.253] (EE) Screen(s) found, but none have a usable configuration.
Fatal server error:
[ 84.253] (EE) no screens found(EE)
The only options are to rebuild a kernel with the security disabled, or to find another method of getting a working display driver.
Fortunately, it is possible to use the old framebuffer method for driving the ATI Rage graphics chip. An xorg.conf needs to be built, to direct the X server to load the fbdev driver.
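A minimal xorg.conf Device section to force the fbdev driver would look something like this (the framebuffer device node is assumed to be /dev/fb0):

```
Section "Device"
    Identifier  "ATI Rage"
    Driver      "fbdev"
    Option      "fbdev"  "/dev/fb0"
EndSection
```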
The xorg.conf above gets the required outcome. A listing from Xorg.0.log is below.
[ 704.327] (II) LoadModule: "fbdev"
[ 704.328] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
[ 704.329] (II) Module fbdev: vendor="X.Org Foundation"
[ 704.329] compiled for 1.19.0, module version = 0.4.4
[ 704.329] Module class: X.Org Video Driver
[ 704.329] ABI class: X.Org Video Driver, version 23.0
[ 704.329] (II) FBDEV: driver for framebuffer: fbdev
[ 704.330] (II) Loading sub module "fbdevhw"
[ 704.330] (II) LoadModule: "fbdevhw"
[ 704.332] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
[ 704.333] (II) Module fbdevhw: vendor="X.Org Foundation"
[ 704.333] compiled for 1.19.6, module version = 0.0.2
[ 704.333] ABI class: X.Org Video Driver, version 23.0
[ 704.335] (**) FBDEV(0): claimed PCI slot 2@0:1:0
[ 704.335] (II) FBDEV(0): using default device
[ 704.336] (**) FBDEV(0): Depth 24, (--) framebuffer bpp 32
[ 704.336] (==) FBDEV(0): RGB weight 888
[ 704.336] (==) FBDEV(0): Default visual is TrueColor
[ 704.336] (==) FBDEV(0): Using gamma correction (1.0, 1.0, 1.0)
[ 704.337] (II) FBDEV(0): hardware: ATY Mach64 (video memory: 8176kB)
[ 704.337] (II) FBDEV(0): checking modes against framebuffer device...
[ 704.337] (II) FBDEV(0): mode "1440x900" ok
[ 704.337] (II) FBDEV(0): mode "1280x1024" ok
[ 704.337] (II) FBDEV(0): checking modes against monitor...
[ 704.338] (--) FBDEV(0): Virtual size is 1440x1024 (pitch 1440)
[ 704.338] (**) FBDEV(0): Default mode "1440x900": 106.5 MHz (scaled from 0.0 MHz), 55.9 kHz, 59.9 Hz
[ 704.338] (II) FBDEV(0): Modeline "1440x900"x0.0 106.50 1440 1520 1672 1904 900 903 909 934 -hsync +vsync (55.9 kHz d)
[ 704.338] (**) FBDEV(0): Default mode "1280x1024": 108.0 MHz (scaled from 0.0 MHz), 64.0 kHz, 60.0 Hz
[ 704.338] (II) FBDEV(0): Modeline "1280x1024"x0.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz d)
[ 704.338] (**) FBDEV(0): Display dimensions: (518, 324) mm
[ 704.339] (**) FBDEV(0): DPI set to (70, 80)
[ 704.339] (II) Loading sub module "fb"
[ 704.339] (II) LoadModule: "fb"
[ 704.340] (II) Loading /usr/lib/xorg/modules/libfb.so
[ 704.341] (II) Module fb: vendor="X.Org Foundation"
[ 704.341] compiled for 1.19.6, module version = 1.0.0
[ 704.342] ABI class: X.Org ANSI C Emulation, version 0.4
[ 704.342] (**) FBDEV(0): using shadow framebuffer
[ 704.342] (II) Loading sub module "shadow"
[ 704.342] (II) LoadModule: "shadow"
[ 704.343] (II) Loading /usr/lib/xorg/modules/libshadow.so
[ 704.344] (II) Module shadow: vendor="X.Org Foundation"
[ 704.345] compiled for 1.19.6, module version = 1.1.0
[ 704.345] ABI class: X.Org ANSI C Emulation, version 0.4
[ 704.345] (==) Depth 24 pixmap format is 32 bpp
[ 704.392] (==) FBDEV(0): Backing store enabled
[ 704.396] (**) FBDEV(0): DPMS enabled
That is all that is required to get the desktop working.
Experimenting with both xfce and lxde, the lxde desktop is clearly faster. But unfortunately neither desktop is particularly workable, as the framebuffer display driver is quite slow. Responsiveness compared to debian Wheezy, which used the mach64 accelerated driver, is poor.
I just purchased an old Sun XVR-100 PCI adapter to give it a go. The Sun XVR-100 is an ATI Radeon 7000 based board with 64 MBytes of memory that contains a Sun ROM, allowing it to be recognised by OpenBoot and to work in the Sun environment.
After configuring the OpenBoot to boot with the XVR-100 as the default screen and, to avoid conflicts, disabling the internal PGX graphics PCI interface, we are welcomed by the following boot screen.
Ultra5 – OpenBoot with XVR-100
So it is looking good! But unfortunately, the radeon and radeonfb drivers are not working properly. The first sign of trouble is early in dmesg, when BAR locations can’t be allocated.
And then again later in the boot sequence the radeonfb driver complains that it can’t map the XVR-100 ROM, and that it is unable to claim BAR 6 to assign a bridge window.
And this leads to the xorg xserver loading the radeon driver but then being unable to properly address the XVR-100, so it bails out leaving us with no X screen. Luckily, the console is still working.
Ever since I can remember, I’ve been substantially myopic, or short sighted. As a child, I would lie awake at night waiting for any kind of night-time creature to emerge from the dark blur and eat me, before I had a chance to see it and run. But luckily, when wearing the prescribed lenses, my corrected visual acuity has always been on the better side of average. This has meant that I’m acutely aware of what good sight looks like, and what it looks like when it is bad.
Growing up, I remember each new prescription would snap the twigs in distant trees back into my consciousness. Something that most people walking around blissfully with uncorrected vision would never perceive.
So for the past 40 years, I’ve woken up, put on my glasses, lived an entire life, removed my glasses, and gone back to sleep. But sadly, last year, something changed. I contracted the dreaded presbyopia disease. Presbyopia is not really a disease. But it is a sure sign that I was getting old. Really old. Old enough that for the first time in my life I needed to have reading glasses, as well as normal distance glasses. This is a major problem, as I was always walking around with the wrong glasses on my face.
So why not bifocal lenses, or for that matter graduated progressive or multi-focal lenses? Well for me it comes down to visual acuity. I am not at all happy to have parts of my vision obscured, and need to look at people down my nose, or whatever is needed to get the small piece of corrected vision between me and the object of interest. So, I needed a viable solution.
I’ve written here about obtaining my personal cyber eyes because there are plenty of medical reports and advertorial information sources out there, speaking highly of the outcomes. But, not very many individuals have actually written about their own experience of vision enhancement.
PRK or LASIK
For some time I’ve considered, and discarded, the idea of PRK or LASIK, as I believe it is a bit of a bodge at best, and a long term science experiment at worst. In my view, scarring the cornea to adjust the optical characteristics of the eye, when it is the lens that is incorrect, just seems in every way wrong-headed. Directly adjusting the lens characteristics should be the essence of the solution. Reports of the reduction in dilated low-contrast visual acuity (i.e. at nighttime) from LASIK do nothing to reduce my perception that it is a bodge.
Perhaps 15 years ago I considered the idea of getting intraocular lenses which, although being very expensive, seemed like the right way to solve my problem of myopia, with no other vision issues. So 6 months ago with this resolution in mind, off to the surgeon I went.
Intraocular Lens (IOL)
Following a very short discussion, the surgeon disqualified me as a candidate for the intraocular lens (IOL) procedure. This relates back to the original reason why I presented, being presbyopia. Simply put, there is no reason to place an auxiliary lens adjacent to an aged human lens. For younger people the IOL is IMHO the right way to go, to preserve all of the options for future surgical advancement. But there is another procedure that is prescribed for older presbyopic myopia sufferers, like me.
Cataract or Clear Lens Replacement (CLR)
Once you have the onset of presbyopia, there is little that can be done with the existing human eye lens. Because of weakness in the muscles of the eye, it becomes a fixed focal distance device that suffers from UV ageing and degeneration. At a certain age, most (all) people begin to suffer from changes to the lens through clouding, which is termed cataract. The signs of cataract development can be detected in most people from about age 50, although the symptoms in vision may not be clearly apparent for another 15 to 20 years.
Following up on the discussion with my surgeon, his team had found the indication of impending cataract in my lenses. This means that at some stage within the next 20 years I would need to have cataract operations to correct my vision. So the question was simply, when?
Coincidentally, the conversation I was having at that point was around Clear Lens Replacement, which is a cosmetic surgery undertaken to remediate the vision of people without the signs of lens degeneration and cataracts. The surgical procedure for Clear Lens Replacement and Cataract Removal is identical (for all intents and purposes, noting I’m not an optical surgeon and there are certainly things I don’t know about).
The procedure consists of making a small 1.5mm to 2mm long slice in the edge of the cornea with the iris fully dilated. The surgeon then slices up the old lens, and vacuums it out of the lens pouch. He then injects a self unfurling lens through the slit and tucks the edges safely under the iris. The operation takes about 20 minutes under strong sedation, and is not accompanied by any pain, or even discomfort (in my case).
Waking up from the snooze, it is incredible to actually see things sharply, though everything is clouded and somewhat unstable. Later it became obvious that the initial blur was mainly caused by the plastic pirate eye-patch I got to wear home (and at nights for a week).
In my case, it took nearly 3 days before the iris dilating drugs wore off, and my eye could function properly in the presence of strong light. This issue led to two symptoms. Initially my iris was so dilated that stray light was entering the optical pathway, and causing “lens flare”. Later, there was just a sensitivity to outside light intensity. By the 4th day this effect had worn off, and my vision was pretty much perfect.
The interim two weeks between the two operations, with just one eye corrected, were quite difficult. My eyes had nearly 4 dioptre difference in prescription, and the eye with the stronger prescription was operated on first. I tried to use my normal glasses with one lens popped out. This meant that my brain had to accommodate an 8 dioptre change in retinal image size in my corrected eye against the image presented by the uncorrected eye, to be able to integrate binocular vision. Basically, I couldn’t do it. So for reading I used a piece of cardboard in my reading glasses to obscure sight in my corrected eye, and for outdoors I just left my uncorrected eye to fend for itself. Possibly using a contact lens in the uncorrected eye may work if needed, as a contact lens impacts the retinal image size to a much lesser degree than glasses.
Well it is now one month since my first operation, and two weeks since my second. And I have to report that the procedure was a success. My visual acuity is sufficiently high that I don’t need any distance correction. I can read two lines below the 6/6 (or 20/20 in Imperial) which is the equivalent of 6/4, the vision of a young person.
My surgeon was aiming for -0.25 dioptre in both eyes, as the margin for error from the mechanical calculation is 0.5 dioptre. Better surgeons will use their experience to temper the calculation and prescription and have a higher chance of getting a good outcome. My right eye (after two months) has settled down from -0.5 immediately after surgery to -0.25 dioptre. My left eye is at +0.25 after a month, and its resolving capability has been slightly improved. This is a very good outcome, and the surgeon is very happy with himself.
So how do my eyes work in the short range with effectively a fixed focus? Well this was the big question that I was worried about before this whole endeavour. Would I be able to see the speedometer whilst driving? Could I read my wristwatch, or see my phone? Well there the answer is yes, mostly. It is amazing (to me at least) how closely the human eye resembles a pinhole camera in practice, and doesn’t need to be actively focused. Although there is no escaping the need to use reading glasses for detailed close work, particularly at night.
My eyes had quite different prescriptions, so my surgeon installed products of quite different types, from different manufacturers. I’m sure my experience is not typical so I’ll note down the issues I’ve seen over the past few weeks.
PhysIOL MicroPure
Right eye was corrected for substantial myopia, with a PhysIOL MicroPure aspheric lens with a square edge. The square edge is to prevent the regrowth of lens cells from interfering with the replacement lens surface.
I find this square edge causes total internal reflection and halo effects in darkness with strongly off centre lighting. An example of the issue is down-lighting in a lift. I’m told this halo effect will disappear within 6 months, as the eye assimilates the lens (I presume), and reduces the interface TIR effect.
Zeiss AT Torbi 709M
Left eye was corrected for astigmatism and myopia, with a Zeiss AT Torbi 709M toric lens. The lens is very comfortable and doesn’t have the same reflection issues as the right eye, but potentially I’ll have to watch for regrowth of human lens cells which would obscure my vision.
An acquaintance experienced cell regrowth. He noticed cling wrap being layered across his vision from about 20 months post operation. The resolution is to blast the regrowing lens cells with a femtosecond laser to burn them away. This is done in the surgeon’s chair. During the original operation some human lens cells are left to support the new lenses in a pocket, but after the new lens has been grown or scarred into place, these old cells can be safely blasted out of the way. Vision is actually improved by removing them entirely.
Within 24 hours of the left eye operation, I noticed a colour perception effect, where my left eye was seeing colours noticeably colder than my right eye. A bit like the difference between a daylight globe and a warm-white globe. I was worried I was “seeing things.” It was only after I obtained the technical details of the lenses the surgeon had used that it became clear: the right eye has a UV- and blue-light-filtering lens, but the left eye lens is unfiltered.
There is no clear preference among surgeons as to which is better, with unfiltered vision potentially leading to better sleep, but blue filtered vision being more closely aligned to young vision. I actually prefer my cold eye colours, but I also prefer daylight globes in my home. In normal daily binocular vision, there is no discrepancy to speak of. In all cases, sunglasses are recommended for outdoor vision, anyway.
One of the goals for this procedure was to enable me to use $5 supermarket reading glasses, and not be bound to special prescription lenses. In fact, that is the outcome I’ve obtained.
There is a handy formula, or hack, that my surgeon disclosed. The dioptre rating on supermarket readers is the inverse of the focal distance at which they work best (assuming your distance vision is perfect, which mine now is). So a +1 dioptre lens will focus at 1/1 = 1 metre. A +3 dioptre lens will focus at 1/3 metre, or about 33 centimetres.
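For the curious, the rule of thumb above is trivial to tabulate; here is a minimal sketch in Python (the dioptre values are just examples, not a prescription):

```python
# Rule of thumb: for an eye that is perfect at infinity, readers of
# D dioptres focus best at roughly 1/D metres.

def focal_distance_m(dioptres: float) -> float:
    """Best working distance in metres for a given reader strength."""
    return 1.0 / dioptres

for d in (1.0, 1.5, 2.0, 3.0):
    print(f"+{d} dioptre -> {focal_distance_m(d) * 100:.0f} cm")
```

Running this prints 100 cm for +1.0, 67 cm for +1.5, 50 cm for +2.0, and 33 cm for +3.0, matching the distances quoted above.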
This, then, is a perfect result. I have already purchased several pairs of $5 glasses and left them where I need them, each with the right focal length: +1.5 for the computer screen, +2.0 for reading, and +3.5 for electronics inspection. If my reading glasses get lost or worn out, who cares? In fact, after being used to wearing the same pair of glasses for several years, it is quite entertaining to be able to buy new glasses every week, just to have a fresh look!
To anyone who has lived with significant near- or farsightedness throughout their life: just hope for the day that you are diagnosed with the indications of cataract disease, and go with the replacement as soon as you can. There doesn’t seem to be a downside to doing this. But there are substantial upsides to be had:
Yes, the shower floor is very grimy.
And yes, that’s a spider on the bedroom wall over there.
I’m going to be holding out for my upgrade to HUD, and integration with Alexa or Siri, so that I can finally remember names.
One Year Review
About 14 months following surgery, I found that the vision in my right eye was deteriorating. Not at any particular distance, but generally: fine detail both near and far was substantially degraded with respect to the vision in my left eye.
This was caused by cells growing on the inside surface of the new lens. Because the lens is a foreign body, my eye was trying to encapsulate it and scar over it, growing a thickening layer of new cells across the lens surface.
Fortunately, the cure for this is to undertake a capsulotomy, which removes the cells from the inside surface of the lens. The procedure takes only a few minutes and, in my case, is immediately effective in returning full vision.
I’d note that my surgeon has stopped using the PhysIOL MicroPure IOL since my operations, as cell regrowth is far more prevalent with it than with the Zeiss lens in my left eye. It would have been nice not to have been an experiment.
The colour perception effect remains, but is only noticeable when I’m looking for it. So absolutely not a problem.
Now five years in, I’m still super happy with the result. My vision has stabilised at +0 dioptres in both eyes, with +0.25 dioptres of astigmatism (i.e. the smallest possible prescription). I think my eyes are slightly drier than before the operation, but that may just be that I’m more aware of them now, or that dry eyes are part of ageing.