I’ve pitched a camp to test that I’ve got all the required things. The lists are great, but they don’t necessarily match reality completely.
It is still a few months before my vehicle and pod trailer will be available, so my old station wagon is serving as the test mule. I loaded everything in and transported it to the camp. It took two trips as I’m minding an extra dog, and I have to allow space to transport dogs rather than equipment.
Of course the weather doesn’t play nice, and it is raining continuously as I try to set up the camp. And, since I’ve pitched camp about 80m from the closest vehicle approach, everything has to be carried in through the rain.
The first job is to work out where the table and cooker work best. I’ve decided to put the table into the middle of the gazebo space, and then use one side for cooking, and later the other side for sitting / reading / writing. Putting the kitchen in the far corner allows me to empty water, coffee dregs, etc onto the grass, and keeps the inescapable ants away from the tent.
Very happy with the old Primus stove. I’ve inherited many pieces of my kit, including the tent, cooker, and quite a few tools and whatnot, so this is the first time in a long while that the 1970s stove has been used.
Similarly, my Terka Tent from Czechoslovakia also looks to be performing well. It hasn’t been out of the bag since the early 1980s. Yet, after standing in the rain all day, it still looks to be fine inside. But time will tell.
As I’m minding an extra dog I had to build a bivouac for her. Using a cheap tarp found in the local supermarket pegged out with 1/3 under and 2/3 over, a piece of 10mm insulation board to get her off the ground, and her own blankets from home, I was able to make her a dry and warm place to sleep.
I’d forgotten how slow it is to get things done. In a house there’s hot water on tap, and boiling water from an electric kettle, and food straight out of the refrigerator. Living in the outdoors, things are not so time efficient. To make a coffee requires preparation, and cleaning up. A choice has to be made as to whether to clean up first, and have cold coffee, or to enjoy the coffee, but then have the chore of cleaning up after the relaxation of consuming the coffee. Similarly, preparing toast, making tea, or any other process around the kitchen needs extra steps that either add time, or reduce the “enjoyment” value of the activity.
Over the weekend the weather got better and better, and once things had dried out the ease of doing everything increased. Fortunately, as it wasn’t too windy, the gazebo provided 9m2 of living space protected from the rain. So, it seems that at least in still weather the gazebo is a valid alternative to a vehicle awning.
Proving to myself that there is no real benefit to a vehicle awning is a big thing. Vehicle awnings are expensive, carry their weight high on the vehicle, and they increase fuel consumption because they rely on having a roof rack to mount them. They also force you to limpet yourself to the side of your muddy or dusty vehicle, and they need to be closed before the vehicle can be moved.
I used some extra guy ropes looped through the gazebo frame and anchored to the ground, to hold it very stable. The gazebo roof is only held onto the frame with Velcro. Should a strong wind gust take the nylon roof off, the frame will remain tied down. A piece of nylon material floating or flapping about won’t do any damage, and will probably come back down to earth quickly as it has no form holding the wind. Alternatively, attempting to hold a 9m2 “sail” to the ground in the face of a strong wind gust is a pointless endeavour.
I am pretty happy with the kitchen set up. The stove, and substitute Engel refrigerator, are easy to use, and effective. I purchased some wicker (plastic) boxes, which I used to store 1. Food, 2. Crockery, 3. Pans and Utensils, and 4. Cleaning and Utility products. These are much easier to use than plastic shopping bags (which would have been the temporary alternative), yet they’re not perfect. The issue is juggling them from the ground (in the rain), to the 5’ Lifetime Table or the top of the Engel. If I build kitchen storage into the back of the Rubicon, then that may solve the juggling of containers, but it may create an issue where things are hard to access. Another alternative is to organise the boxes by process or task or by frequency of access, rather than by topic as I’ve done now.
For the sleeping arrangements, I’ve got it generally right, but I’m still not totally happy. The combination of the stretcher and mattress makes for a very warm and comfortable bed. However, the way I’ve organised the YHA Bedsheet together with antique woollen blankets is uncomfortable. They slip around on the mattress, and become tangled quite easily. One very pleasing accident with the YHA Bedsheet is that the pillow slip serves to hold the pillow from slipping off the end of the bed. As the stretcher has no “head” this can happen easily, so it is very useful that the pillow is retained by the sheets.
As I’ve prepared the bedding into a rolled swag, with a 6’x8’ tarpaulin outside the self-inflating mattress, sheets, and blankets, the idea of storing everything together has worked well. The plan is that having all the bedding rolled together will allow the bed to be dropped anywhere and used, and stay dry until unrolled even if it is carried in the rain.
Because the Engel refrigerator is very heavy, as an exception I used a handcart to move it from the vehicle to the camp site. And, because I had it with me I was also able to use it to move the 12V battery, 20l water containers, and other heavy items. A handcart is not something that I’d previously considered taking along, but now I think if there’s space then it will come with me. It will be useful for moving water, firewood, and many other backbreaking tasks.
I didn’t bring a broom. I should have. A hand brush is useful to sweep standing water, meat ants, and leaves off the ground tarpaulin, and to clean the tent and generally around the site. But I think that a broom would work better and be much easier on the back.
I’ve taken some pictures of the things that worked, to remind me what comes with me.
Stadium Seats for chairs are very comfortable, and weigh nothing. They can be used anywhere. The handcart is a very useful addition. Not sure about the collapsible water containers. Sure they’re easy to store and keep handy, but they’re not easy to use when half full. So, it is best to transfer their contents into a “working” container. The kettle was used more often than any other utensil. Hot water is essential.
And finally, the Australian bush just loves to come home with you. This “little” guy was hitching a ride in the cutlery container. He joined the meat ants roaming around the campsite, and the mosquitos swarming everywhere, making this camp a 3 of 6 wildlife expedition. Fortunately no scorpions, or snakes. And no centipedes.
From the previous discussion on Curb Weights I’ve selected a method to transport sufficient water, fuel, supplies, and equipment to travel in the bush. But, I also need to examine how best to sleep given the impact of common environmental influences in the bush.
There are a number of sleeping systems made possible by the vehicle and trailer combination for going bush. The images below try to indicate these in context.
I’ve found many options are possible. Six major options are indicated above, related to the previous discussion on curb weights, and they are discussed following some basic environmental considerations.
The environment of Australia plays the major role in determining the best sleeping system. Let’s talk about the environmental aspects of sleeping.
There are no bears, lions, or other large predatory mammals in Australia. The wildlife is much smaller, more intimate, and loves nothing more than a warm bed to snuggle into during the cold desert nights.
Keeping the local wildlife out of your bed, and out of your boots, is an important consideration for where your bed should be.
The Dry Creek
When selecting a flat smooth camp site it is easy to believe that a dry creek bed is a good place to spend the night. Unfortunately bush creeks are subject to flash flooding, and the storm which causes the flood may be a great distance away. It may not even rain where you are.
While the flood waters will not be as extreme as shown here (Todd River, Alice Springs, Northern Territory), it is very possible that a creek bed campsite can become subject to anything from a trickle of water to an active flowing creek during the night.
The Gibber Plain
The term “gibber plain” is used to describe desert pavement in Australia. It is a desert surface covered with closely packed, interlocking angular or rounded rock fragments of pebble and cobble size. Desert varnish can collect on the exposed surface rocks over time.
Gibber is located across much of central Australia, and in the desert you’ve a choice of sand dunes, bull dust, or gibber plains on which to make a camp site. So, remember to take a rake.
Ok, now we’ve got some of the environmental preamble out of the way, let’s look at the options for sleeping comfortably.
The swag or bed-roll is the traditional method of rough camping in the Australian bush. It is basically a canvas tarpaulin with a mattress inside. They’re quite warm and they’re also waterproof. It zips all the way up so it covers your head, and you have the canvas for a rain cover. The swag is useful for any of the options for carrying the equipment, and can be used in good conditions even when other alternatives are available.
The swag is the choice of bedding for a swagman. A swagman was a transient labourer who travelled by foot from farm to farm carrying his belongings in a swag. The term originated in Australia in the 19th century and was later used in New Zealand.
Swagmen were particularly common in Australia during times of economic uncertainty, such as the 1890s and the Great Depression of the 1930s. Many unemployed men travelled the rural areas of Australia on foot, their few meagre possessions rolled up and carried in their swag. Their swag was frequently referred to as “Matilda”, hence the song Waltzing Matilda, based on Banjo Paterson’s poem, refers to walking with their swag. Typically, swagmen would seek work in farms and towns they travelled through, and in many cases the farmers, if no permanent work was available, would provide food and shelter in return for some menial task.
A swag is quite the romantic holiday experience for backpackers, but it is not going to protect the quality of your sleep from gibber plain cobbles, local wildlife, or flash flooding. So its use case is limited to good conditions, and it shouldn’t be relied upon in bad conditions (if you want quality sleep). Score here is 0 from 3 environmental points.
Tents are the next option to discuss. They can protect you from local wildlife, provided they are always properly closed up and are in good condition. They will protect you from bad weather and some water flow. But inside, the ground will still be rocky and uneven if camped on gibber cobbles or other rock formations. So let’s say the tent scores 1.5 from 3 environmental points.
Roof Top Tent
The Roof Top Tent would seem to address all of the failings of a normal tent. Placing the tent up high, with a flat bed and mattress, removes all of the environmental issues and scores maximum 3 points.
However, the RTT has a fatal flaw. Every time you go up to bed, or get out of bed, you have to climb a 2m high ladder. This is not an issue 99 times out of 100, but if you’re sleeping in it for a year the chances are you’re going to slip at least 3 or 4 times, and one of those times (in compliance with Murphy’s Law), you’ll break your leg, and that fall will occur in the middle of the Simpson Desert.
From that safety issue alone the Roof Top Tent has a complete veto, in my opinion.
The stretcher bed, cot, or camp bed is an option to uplift quality of sleep in a tent, under a swag, or anywhere there is no full mattress available. The stretcher bed adds environmental points to the tent and also to the swag by lifting you above gibber and minor water, and protecting against most of the wildlife issues (except mosquitos).
The Experts agree that if there’s space available they would never go bush without a stretcher bed. The combination of tent and stretcher bed scores the full 3 environmental points.
While the trailer, and specifically the pod trailer, is not designed for sleeping, it can be used as an alternative sleeping platform in the case of bad weather, when the gibber or bull dust is too thick to sleep on the ground, or when there is no need to set up a tent.
Pod trailers can be optioned up to become a full soft-floor camper, but that is not the intention of this discussion. The goal is simply to point out that, as an alternative, the bed of a pod or box trailer can be used as a base for a swag or bed roll, instead of using a stretcher bed, and it scores 3 environmental points for this purpose.
Sleeping indoors, while in the great outdoors, is the epitome of comfort. Having a clean, dust proof, wildlife proof haven at the end of the day will provide the best possible sleep quality. But, of course this does come at some expense, and the issues covered in the Curb Weight discussion apply. Scores 3 environmental points.
Going into the Australian Bush requires the intrepid traveller to carry substantial supplies of water and fuel, as well as the normal requirements for living off-grid for extended periods. This is quite different to the norm in international overlanding, where fuel and water are usually available in small towns, and is due to the very (incredibly) low population density of the Australian Bush. The population density of Australia’s Northern Territory is 0.16 people/km2, about 1/100th of the density of Argentina with 17 people/km2, or 1/25th of Botswana with 4 people/km2, for example. So the purpose of my lists is to estimate the weight required to travel with some comfort across long desolate desert tracks, before the required fuel and water supplies are added.
Whilst my lists remain incomplete they are a useful tool to establish the total equipment and supplies budget, and then contemplate the best method to carry everything. My current calculation shows that the normal total mass estimate is around 575kg, including fuel and water. A rough breakdown of the categories is below.
Recovery Equipment – 50kg
Vehicle Spares / Consumables – 20kg
Tools – 30kg
Camping Equipment / Tents / Tarps – 90kg
Battery & Electrical – 50kg
Refrigerator / Slide – 45kg
Cooking Utensils – 20kg
Computers / IT / Camera – 20kg
Clothes / Blankets / Linen – 30kg
Food – 30kg
Unclassified / Toys – 20kg
Adding to the items above, it is necessary to carry water sufficient for 20 days, and fuel to bridge the longer distances between services.
Now that might look like I’ve budgeted to carry a lot of stuff, but the idea is not to load up to 100% capacity before departure. Rather, the calculation is intended to allow room for growth over time, as stuff tends to accumulate, and trophies and memorabilia will take up their share of space too. Nobody likes to climb into a vehicle and have their stuff fall out on the road because everything is packed to the roof.
So, I’m going to estimate that a total payload budget of 600kg will be sufficient. How can that payload be effectively carried across sand and rock over thousands of kilometers?
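As a rough sanity check, the category masses above can be totalled in a few lines of Python. The water and fuel figures here are my own illustrative assumptions (5 l/day and ~90 l of jerry-can fuel), not numbers from the lists:

```python
# Equipment categories from the lists above (kg)
categories = {
    "Recovery Equipment": 50,
    "Vehicle Spares / Consumables": 20,
    "Tools": 30,
    "Camping Equipment / Tents / Tarps": 90,
    "Battery & Electrical": 50,
    "Refrigerator / Slide": 45,
    "Cooking Utensils": 20,
    "Computers / IT / Camera": 20,
    "Clothes / Blankets / Linen": 30,
    "Food": 30,
    "Unclassified / Toys": 20,
}

equipment = sum(categories.values())   # 405 kg of equipment before fluids

# Illustrative assumptions (not from the lists):
water = 20 * 5 * 1.0    # 20 days at 5 l/day, 1 kg/l -> 100 kg
fuel = 90 * 0.75        # ~90 l of jerry-can fuel at ~0.75 kg/l -> ~68 kg

total = equipment + water + fuel
print(f"Equipment {equipment} kg, total ≈ {total:.0f} kg")
```

With those assumptions the total lands within a few kilograms of the 575kg estimate, which is close enough for a packing budget.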
Carrying the Payload
As a starting point, the Jeep Wrangler JL Rubicon 2 door is the chosen vehicle for going bush.
From the 2020 Wrangler Specification, the Rubicon can carry a maximum payload of 1322lbs, or 600kg, in the 4 door version. The 2 door version has similar mechanical specifications and weighs about 100kg less, but I will assume that it can’t carry a greater payload than the larger 4 door version. Maximum braked towing capacity for the 2 door version is 1497kg. Let’s have a look at some of the options for carrying 600kg with a 2 door Rubicon.
In the above images I’ve considered some alternative solutions for carrying 600kg payload (in green), and the maximum usable axle load (in red) for the vehicle. My alternatives include:
Bare Vehicle – everything inside the vehicle
Roof Rack – 450kg in vehicle, 150kg on roof (and outside)
Roof Top Tent – 500kg in vehicle, 100kg of RTT (and outside)
Box Trailer – 200kg in vehicle, 400kg in trailer
Pod Trailer – 200kg in vehicle, 400kg in trailer
Teardrop Camper – 200kg in vehicle, 400kg in camper
The Wrangler 2 door is a very small vehicle and, although it is probably the most capable 4WD available “off the showroom floor”, loading it up to the maximum payload will make a very uncomfortable origami that would need to be unfolded at each camp and then intricately repacked each morning. Additionally, as the maximum rear axle payload is about 1000lbs, or 450kg, the available payload would be limited to less than the maximum vehicle payload, as there would be no way to share the weight to the front axle.
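To make the axle-load argument concrete, here is a hypothetical back-of-envelope check in Python. The 80% rear-share figure is my own guess at how much of the in-vehicle cargo ends up on the rear axle, and the roof-top and trailer portions of each option are ignored; only the in-vehicle loads from the list above are counted:

```python
# Hypothetical rear-axle check for the in-vehicle portion of each option.
REAR_AXLE_ALLOWANCE = 450   # kg, from the ~1000 lb rear axle payload figure above
REAR_SHARE = 0.80           # assumed fraction of in-vehicle cargo on the rear axle

# In-vehicle loads (kg) from the options list; roof/trailer loads excluded.
options = {
    "Bare Vehicle": 600,
    "Roof Rack": 450,
    "Roof Top Tent": 500,
    "Box / Pod Trailer": 200,
    "Teardrop Camper": 200,
}

for name, in_vehicle in options.items():
    rear_load = in_vehicle * REAR_SHARE
    verdict = "OK" if rear_load <= REAR_AXLE_ALLOWANCE else "OVER"
    print(f"{name:18s} rear axle {rear_load:5.0f} kg  {verdict}")
```

Even with this crude split, the bare vehicle option busts the rear axle allowance while the trailer options leave a comfortable margin, which is the point being made here.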
In my opinion the only way to carry 600kg on a Rubicon is to distribute the weight onto both axles by using a Roof Rack.
By adding a roof rack, and possibly also side racks for fuel and a rear rack for tools and fuel, it is possible to distribute the weight onto both axles, and also increase the load volume of the Rubicon sufficiently to reasonably store the maximum vehicle payload.
The cost for roof rack system consists of a base rack of around A$1,000, guard rails at around A$500, and then vehicle specific mounting kits from around A$400. Accessories to mount shovels, high lift jacks, jerry cans, or gas bottles can be added for around A$200 per item.
Adding a roof rack will increase the loading on the front axle and especially the rear axle up to the maximum design rating of 3100lbs, or 1400kg, and will increase the tire load affecting both sand driving ability and the tire wear characteristics. Axles, wheels, and tires will be running at maximum load constantly. Adding a lift-kit to balance out the spring compression will not resolve this loading issue, and it is likely that the vehicle may end up being over-weight from a legal (insurance) perspective.
Using a roof rack will also significantly impact the dynamics of the vehicle. Adding up to 100kg onto the roof and 50kg to the outside of the vehicle will increase the overall pitch and roll as the track pushes the vehicle around. It will be very uncomfortable, and may actually become unsafe as the maximum approach and departure angles are reached.
On road, which will be the majority of the kilometres travelled, fuel economy can suffer by 10% and up to 25% according to some reports. Where tens of thousands of kilometres are at stake, and fuel is both in limited supply and expensive, it is best not to use a roof rack if there are better alternatives.
Roof Top Tent
The roof top tent suffers from the same dynamics and fuel economy issues as the roof rack, and it is also of very limited application being purely a place to sleep. If a roof top tent is fitted then the top of the vehicle can no longer be used for storing equipment.
A roof top tent costs around A$5,000, but considerably more can be spent if desired.
With the issues associated with roof top tents being the same as with roof racks and offering no other advantages, it is better to seek alternatives.
Many people have realised the benefits of an additional load carrying axle when travelling around Australia. The typical steel box trailer in the standard 6’x4’ or 7’x5’ single axle configurations lives in most suburban back yards, and has been making the journey to the summer camping holiday since forever. It has become more common recently to add off-road suspension and hitch components to make the box trailer capable of serious expeditions.
The typical suburban box trailer costs around A$1,500, but the vehicle must have a trailer hitch which can cost up to A$2,000 to install, depending on the vehicle. An off-road trailer with uprated suspension and chassis typically starts around A$5,000, but specialist camper trailers can be substantially more. Some fully fitted off-road box trailers cost upwards of A$60,000.
The design and registration of box trailers typically focus on a gross vehicle mass (GVM) of 750kg, and their Tare is typically 250kg in the best case, leaving a payload capability of 500kg. If our total load can be distributed between vehicle and trailer then we can load the trailer with 400kg, leaving a margin of 20% remaining, and reduce the vehicle load to 200kg.
The load in a trailer is carried with a low centre of mass, so that the dynamics of the tow vehicle are not affected, and having the additional loaded trailer axle reduces the wear on the vehicle axles, wheels and tires.
However, towing a box trailer does not come for free. There is an increase in fuel consumption to be expected from towing. Depending on the size of the load carried and the amount of wind drag created by the trailer, the increase in fuel consumption may be up to 10%. This is significant, but it is much less than if a similar load were on a roof rack. And, as we now have a greater free load capacity it is possible to carry up to 100l of extra fuel as needed.
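To put those percentages in perspective, here is a rough comparison sketch in Python. The baseline consumption and trip distance are my own assumptions; the penalty percentages are the figures quoted above (roof rack up to 25%, trailer up to 10%):

```python
# Rough fuel penalty comparison over a long trip. Baseline consumption and
# distance are assumptions for illustration only.
BASE_L_PER_100KM = 12.0   # assumed consumption for the loaded vehicle
TRIP_KM = 20_000          # assumed trip length

def extra_fuel(penalty_pct: float) -> float:
    """Extra litres burned over the trip for a given consumption penalty."""
    return BASE_L_PER_100KM * (TRIP_KM / 100) * penalty_pct / 100

print(f"Roof rack (25%): +{extra_fuel(25):.0f} l")
print(f"Trailer   (10%): +{extra_fuel(10):.0f} l")
```

On these assumptions the roof rack burns hundreds of litres more than the trailer over the same distance, which matters when fuel is scarce and expensive in the bush.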
An important advantage to using a trailer is that it can be disconnected from the vehicle and left behind at a camp site, or trail head, when its contents are not needed. Through this method most of the payload associated with living does not need to accompany the vehicle on a difficult 4WD trail. This minimises the chances of breakage or damage to the payload.
The box trailer has several disadvantages. Firstly, the load is carried open and unsecured, and secondly, the payload is subject to dust and sand from both the vehicle rear wheels and the environment generally. Whilst Australia is generally safe, for peace of mind it is best to keep valuables and equipment hidden out of sight when the trailer is left behind. So box trailer loads are usually covered by a tarpaulin or load cover. This adds to the soft security of the load, and helps to prevent dust and sand ingress, but it is time consuming to wrap and tie down the load each morning.
There are many advantages to using a simple box trailer to carry the payload, but it would be more ideal if the box trailer load could be covered by a solid lockable lid to secure the load and mitigate dust and dirt ingress.
Recent advances in plastics technology have enabled the creation of large roto-molded polyethylene structures, and companies have started to produce off-road “pod” trailers using polyethylene tubs and lids jointed like a clam shell and sealed with a gasket to produce an effective dust seal.
Typically these pod trailers incorporate all of the advantages of the box trailer, adding in the tare weight saving of a dust resistant plastic tub and sealed lid, and the aerodynamic efficiency of a smooth load top.
Many pod trailers can carry a payload of 750kg to 810kg, with their GVM being 1250kg with trailer brakes. An extreme off-road pod trailer can cost from A$13,000, and customisation and options can be added to increase the suitability for long distance expeditions.
With an appropriate off-road independent suspension, hitch, and trailer brakes, a pod trailer can follow behind a vehicle on all but the most difficult 4WD tracks. And where necessary the secure lockable pod can be left behind at a camp site or trail head.
Moving up from the box trailer or pod trailer solution, it is possible to consider a teardrop or square drop camper. The key advantage of the camper is that the question of sleeping arrangements is answered by a permanently made bed. At the end of a day, or when weather is bad there is a lot to be said for a ready-made bed.
A teardrop camper usually has a significant Tare approaching 900kg, and they can usually carry at least 400kg and up to 800kg in payload. They can easily accommodate the 400kg we need to carry. However the camper GVM will certainly be approaching 1,900kg when fully loaded. This is about 1 Tonne more than a box or pod trailer.
Teardrop campers range in price from A$50,000 and up to around A$100,000, making them potentially more expensive than the tow vehicle.
Besides the large GVM of the teardrop camper, there is a cost to transport the volume for a bed and “sleeping space” around the country. The cost comes in the form of increased drag and increased fuel consumption from the larger box, and in reduced space to store camping equipment, unless the potentially dirty equipment is transported on top of the clean made bed.
Conclusion and Decision
Following on from the discussion above, I have decided to go with the pod trailer solution. Although using a trailer will close off some of the more extreme trails and options, such as parts of the CSR, the flexibility to leave the pod and equipment safely behind at the campsite, and have the small tow vehicle remain relatively unmodified (no heavy duty springs, or body lifts, etc), together with the other points discussed above, make the pod trailer the best value for money.
The pod trailer has some further advantages that I’ll discuss in a post on Sleeping Arrangements, and also further in Redundant System Design.
Really, I should be doing better than this. With a background in Agile Methodologies and Waterfall Project Management, using lists is positively Neanderthal in comparison. Yet, here I am. To get things done, and to remember what needs to be planned and done, I’m writing lists.
About a month ago I asked a good friend whether he’d be interested to join me in the bush for a while, since it had become obvious the only other interested co-traveller was our family dog. His response: would I like a copy of a mutual friend’s 23 camping lists? To which I just laughed. I mean 23 lists… come on. How many lists do you need to leave the house for a few weeks?
I didn’t think about lists for a while. But then after consuming another 20 hours of YouTube suggestions and recommendations for overlanding or international expeditions, I could no longer hold all of the thoughts and ideas in my head. And then I realised that this mutual friend has the right idea. Put it on a list and then it is managed. Putting it on a list doesn’t get it done, but it does get it reviewed every time the list is examined.
One month into this, I’ve established 11 lists for going bush. At this stage I’ve written no lists of destinations or activities, but rather focussed entirely on what I’ll need to make sure that being out bush won’t be life threatening, and will be mainly enjoyable.
So here’s my “TOP 11” List of Lists for going bush.
Vehicle – accessories and upgrades
Recovery – how to recover from vehicular stupidity
Tools – fixing things that fail
Spares – consumable items for vehicles
Camp Equipment – portable lifestyle
Cooking Utensils – to eat healthily
Camp Consumables – not quite food, but related
IT / Photography – toys related to bush activities
Planning during the past two years has been impossible. Worldwide, for everyone, so many things have changed.
I had plans, but I guess they’ve changed too. I no longer possess a licence for free movement, so leaving the country has become impossible. In the interim I’ve decided to go bush, at least until the restrictions on free movement are relaxed, and probably longer.
The Australian term “to go bush” has various definitions, including “to abandon amenity, and live rough” or “to live a simpler or more rural lifestyle”. Alternatively it can be seen as “going into hiding, to avoid authorities”.
So these will be my notes on going bush. Since I’m not photogenic, vlogs are out and the written word will have to suffice. I hope to cover my motivation for decisions on timing, routes, and gear. And when the adventure finally begins, and I am truly “gone bush”, updates on activities should come regularly wherever the Internet exists.
Just over 4 years ago, on 18th March 2018, I committed the first CP/M-IDE files into the RC2014 repository. Now that some time has passed and it has developed into a stable solution for CP/M I think it is time to fill in some details about why it was written, how it differs from other CP/M implementations, and how to reproduce images to match those in the CP/M-IDE repository.
There are several implementations of CP/M available for the RC2014. Initially, the CP/M on breadboard implemented by Grant Searle became the default implementation for the Z80 RC2014. Slightly later Wayne Warthen added support for the RC2014 to the Z80/Z180 RomWBW System. RomWBW is a very extensive and advanced set of system software, supporting many different RetroBrew machines, and in general it requires 512kB ROM and 512kB RAM to reach its full potential.
Each of these implementations has its own focus. The 9 Chip CP/M is based on simplicity, and being able to be built on a breadboard with the minimum of complexity, but it uses an occasionally unreliable 8-bit CF Card interface. RomWBW supports a variety of hardware including Z180 CPUs, and provides an underlying generalised architecture support which provides paged memory and many facilities but this imposes a processing overhead on I/O, and requires substantially more RAM than a typical CP/M system.
Faced with both these options, being very interested to build my own solution, and wanting to use my growing experience supporting the z88dk community, I decided to build CP/M-IDE to fulfil a specific niche.
The CP/M-IDE is designed to provide support for CP/M on Z80 while using a normal FATFS formatted PATA or IDE drive. And further, to do so with the minimum of cards, complexity, and expense. Most recently, it has also become the CP/M which supports the 8085 CPU Module.
Initially, I chose the IDE Hard Drive Module specifically because I could use it to attach any old hard drives, aka “spinning rust” to my RC2014, and this led to support for everything from these old 5 1/4″ hard drives, through to modern SSD or DOM solid state drives. It also supports both old and modern Compact Flash Cards in their native 16-bit mode, so readily available 1 and 2 GigaByte Compact Flash cards are OK. It is also possible to use SD Card to CF Card adapters with the IDE Hard Drive Module, allowing direct high performance support of modern pluggable storage.
CP/M is a very compact Operating System and, in the most common version 2.2, it supports only serial interfaces and disk interfaces. For the RC2014 there are two standard serial Modules, being the ACIA Module and the more advanced and expensive SIO/2 Module.
As I’m quite interested in building real-time and event driven systems, in contrast to other CP/M implementations, CP/M-IDE therefore includes drivers supporting both transmit and receive interrupt based solutions, sourced from my z88dk RC2014 support package for the ACIA serial interface and the SIO/2 serial interface.
8085 CPU Module
More recently I have built an 8085 CPU Module for the RC2014 System. This is the first time that an 8085 CPU has been integrated into the RC2014 System, and it is able to work with the Z80 bus signalling required to drive the standard RC2014 Modules.
The concept remains to use the minimum of additional hardware over the entry level RC2014 Pro model. In fact just the IDE Hard Drive Module is necessary.
IDE Hard Drive Interface
The IDE Hard Drive Module is based on the 8255 PPI device. This device was designed to work with the 8085 CPU and 8086 CPU. It is perfectly suited to supporting a 16-bit parallel IDE interface as it provides latching of signals on 3 separate 8-bit ports.
Initially I was concerned that the selection of control signal pins for the IDE interface limited the possibility for use of the 8255 device for generalised I/O. I still think that this is an issue but, since no one has implemented further generalised solutions, the point is moot.
The IDE interface (also termed diskio) is optimised for performance and can achieve over 110kB/s throughput using the FatFS library in C. It does this by minimising error management and streamlining read and write routines. The assumption is that modern IDE drives have their own error management, and if there are errors from the IDE interface then there are bigger issues at stake.
The IDE Hard Drive Module supports PATA hard drives of all types (including SSD IDE and DOM storage) and Compact Flash Cards and SD Card Adapters in native 16-bit PATA mode with buffered I/O being provided by the 82C55 device.
In the ACIA builds, the receive interface has a 255 byte software buffer, together with optimised buffer management supporting the 68C50 ACIA receive double buffer. The choice of memory size for the receive buffer is based on the optimisations available by making the buffer a full “page”. Text can also be “pasted” in fairly large chunks into the CP/M command line without losing bytes.
Hardware (RTS) flow control of the ACIA is provided. The ACIA transmit interface is also buffered, with direct cut-through when the 31 byte software buffer is empty, to ensure that the CPU is not held in a wait state during serial transmission. The size of the transmit buffer is based on free memory within the CP/M BIOS. As BIOS memory is typically reserved starting on a 256 byte page boundary, if an update needed to consume more RAM I would reduce the size of the transmit buffer rather than consume an additional page of BIOS memory.
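The transmit cut-through idea described above can be sketched in C. This is a minimal host-side model, not the actual BIOS code: the UART is mocked by two variables, and all names are illustrative.

```c
/* Sketch of transmit "cut-through": if the software buffer is empty and
 * the (mocked) transmitter is idle, the byte is written straight to the
 * device, avoiding interrupt latency. Illustrative names only. */
#include <assert.h>
#include <stdint.h>

#define TX_BUF_SIZE 32                      /* holds at most 31 bytes */

static uint8_t tx_buf[TX_BUF_SIZE];
static volatile uint8_t tx_head, tx_tail;   /* free-running, masked on use */
static uint8_t uart_tdr;                    /* mock transmit data register */
static int     uart_tdr_empty = 1;          /* mock "transmitter idle" bit */

static uint8_t tx_count(void) { return (uint8_t)(tx_head - tx_tail); }

void tx_putc(uint8_t c)
{
    if (tx_count() == 0 && uart_tdr_empty) {
        uart_tdr = c;                       /* cut-through to the device */
        uart_tdr_empty = 0;
        return;
    }
    while (tx_count() >= TX_BUF_SIZE - 1)   /* full: in real code the   */
        ;                                   /* TX interrupt drains here */
    tx_buf[tx_head++ & (TX_BUF_SIZE - 1)] = c;   /* queue for the ISR   */
}

/* what the TX interrupt would do when the device raises "register empty" */
void tx_isr(void)
{
    if (tx_count()) {
        uart_tdr = tx_buf[tx_tail++ & (TX_BUF_SIZE - 1)];
        uart_tdr_empty = 0;
    } else {
        uart_tdr_empty = 1;                 /* nothing queued: go idle  */
    }
}
```

The design point is that a character printed to an idle serial port costs no buffering at all, while bursts fall back to the interrupt-driven queue.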
In the SIO/2 build, both ports are enabled. Both ports have a 255 byte software receive buffer supporting the SIO/2 receive quad hardware buffer, and a 15 byte software transmit buffer. The transmit function has direct cut-through when the software buffer is empty. Hardware (RTS) flow control of the SIO/2 is provided. Full IM2 interrupt vector steering is implemented.
As both ACIA and SIO/2 devices have a hardware buffer for received bytes, it is important for the receiving interrupt handler to drain these buffers completely before returning execution to the program. If this is not done there is a danger that received bytes could be overrun and lost.
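The drain rule above can be sketched as follows. The device is mocked by a small array standing in for the ACIA/SIO hardware FIFO; the 8-bit head/tail indices into a 256-byte buffer wrap for free, which is the “full page” optimisation mentioned earlier. Names are illustrative, not the actual driver symbols.

```c
/* Sketch of "drain completely": the RX handler keeps reading while the
 * device still flags data ready, so a double (ACIA) or quad (SIO/2)
 * hardware buffer can never overrun. Device access is mocked. */
#include <assert.h>
#include <stdint.h>

static uint8_t hw_fifo[4];                  /* mock hardware receive FIFO */
static int     hw_count;                    /* bytes waiting in the FIFO  */

static int     rx_data_ready(void) { return hw_count > 0; }
static uint8_t rx_read_data(void)           /* mock read of the data reg  */
{
    uint8_t c = hw_fifo[0];
    for (int i = 1; i < hw_count; ++i) hw_fifo[i - 1] = hw_fifo[i];
    --hw_count;
    return c;
}

static uint8_t rx_buf[256];                 /* 255 usable bytes: one page */
static uint8_t rx_head, rx_tail;            /* uint8_t indices wrap at 256 */

void rx_isr(void)
{
    while (rx_data_ready()) {               /* drain, don't take just one */
        uint8_t c = rx_read_data();
        if ((uint8_t)(rx_head + 1) != rx_tail)
            rx_buf[rx_head++] = c;          /* index wraps for free       */
        /* else: byte dropped; RTS flow control should prevent this case */
    }
}
```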
For the CP/M-IDE 8085 build the 8085 Serial Output Data (SOD) pin is enabled as the CP/M LPT: interface. This is activated by using ^P as per normal practice.
Whilst there is no support for additional hardware within CP/M itself (as there are no BDOS calls standardised), it is possible to use additional hardware in CP/M applications. Typical hardware options include the APU Module, various Sound Modules, and digital I/O Module.
There are many descriptions of Digital Research CP/M, so I won’t go into detail. It is important to know that CP/M v2.2 was in its day the most widely deployed Operating System for small computers based on the 8080, 8085, and Z80 CPUs. Later versions of CP/M supported the 8086 and 68000 CPUs, as well as providing many more system functions than CP/M v2.2.
Whilst there have been later versions of CP/M produced, to my knowledge, there were no widely available user applications produced which could not be run on CP/M v2.2. This broad compatibility is why CP/M v2.2 is important.
CP/M v2.2 is essentially just 4 pieces of code. The BIOS (Basic Input Output System) is provided to abstract the hardware devices from the operating system. Essentially there is a limited set of BIOS commands that the BDOS can call on. These BIOS commands are implemented specifically for the characteristics of each machine, and in the early days of computing it was essential that a user knew how to write their own BIOS.
The second piece of code is the Page 0 of memory, which is written by the BIOS cold boot command on initialisation. The role of this Page 0 is to provide important addresses (for both BIOS and BDOS) and to set important status registers like the I/O Byte. The Page 0 is also used to manage the 8080, 8085, and Z80 CPU interrupt vectors, and to store the command line entered by the user when an application is initialised.
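The key Page 0 locations the BIOS cold boot must write are well known from standard CP/M v2.2, and can be summarised as C constants. These addresses are standard CP/M, not anything specific to CP/M-IDE:

```c
/* The well-known CP/M v2.2 Page 0 locations set up by the BIOS cold boot. */
#include <assert.h>

#define CPM_WBOOT_JP   0x0000  /* JP to the BIOS warm boot entry           */
#define CPM_IOBYTE     0x0003  /* I/O Byte: logical-to-physical device map */
#define CPM_CDISK      0x0004  /* current drive and user number            */
#define CPM_BDOS_JP    0x0005  /* JP to BDOS: the application call gate    */
#define CPM_DEF_FCB    0x005C  /* default File Control Block              */
#define CPM_DMA_BUF    0x0080  /* default DMA buffer / command tail       */
```

An application finds the top of usable TPA by reading the BDOS address stored at 0x0006, which is one reason this page must be written correctly before anything else runs.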
The CP/M BDOS is the middle layer of the Operating System. Application programs rely on BDOS system calls to support their requirements. Here the drives (A:, B:, through to at most P:) are selected, files are opened and closed, and disk sectors are read and written. The BDOS does its work by calling BIOS commands on behalf of the application that is currently loaded.
Often the BDOS is combined with the CCP (Console Command Processor) in one assembly language file because both of these components are constant and they are independent of the hardware. This is essentially the distribution of Digital Research CP/M provided to the user.
The CCP is the user interface for CP/M. It provides a very small number of integrated commands, like “erase”, “rename”, “type” or “exit”, but its main role is to load additional commands or applications called “Transient Programs” into RAM and execute them. Often, an application loaded into the Transient Program Area (TPA) RAM will overwrite the CCP in memory as it is normal for the CCP (and BDOS) to be reloaded once an application quits.
There are third-party alternatives available for both the CCP and BDOS, and as these are loaded each time the computer is restarted it is possible to replace the default versions with alternatives if desired. Specifically for CP/M-IDE, the DRI CCP can be replaced by Microshell SH (here), or both CCP and BDOS can be replaced by NZCOM, without impacting the installed default system.
CP/M was developed before there was a standard implemented for computer disk drives, and every system had its own peculiarities. In order to cope with this situation each BIOS had to be written to cover the possibilities, by completing a Disk Parameter Block (DPB). Each disk type needs its own DPB, which takes space in BIOS RAM, so it makes sense for CP/M-IDE to be implemented with only one type of disk supported. Additionally, each drive attached by the BIOS requires a substantial Allocation Vector (ALV) RAM reservation. It needs to be said that providing for unused drives in CP/M substantially increases the BIOS size, and commensurately reduces the TPA RAM available for user applications and in turn their working RAM.
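The Allocation Vector cost is easy to put a number on. Assuming a 4 kByte allocation block size (a common choice for an 8 MByte CP/M disk; the actual CP/M-IDE DPB may differ), each attached drive needs one bit of ALV RAM per allocation block:

```c
/* Back-of-envelope for the per-drive Allocation Vector (ALV) reservation,
 * assuming 4 kByte allocation blocks (an assumption, not the published
 * CP/M-IDE DPB). One bit per block on the drive. */
#include <assert.h>

int alv_bytes(long disk_bytes, long block_bytes)
{
    long blocks = disk_bytes / block_bytes;   /* DSM + 1 in DPB terms */
    return (int)((blocks + 7) / 8);           /* one bit per block    */
}
```

Under these assumptions an 8 MByte drive costs 256 bytes of ALV, so 4 drives reserve 1 kByte of BIOS RAM, and supporting all 16 drives A: to P: would take 4 kByte straight out of the TPA.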
A subtle but important advantage to using only one disk type is that every disk is orthogonal (fully interchangeable), and it can be located anywhere (beginning at any LBA) on the underlying physical disk. Also, it does not matter into which CP/M drive A:, B:, C:, or D: a disk is loaded when booting. The CP/M system disk looks exactly like any other disk, and every CP/M disk file can be located anywhere on the FATFS parent drive.
Further, the CP/M-IDE CCP/BDOS/BIOS operating system binaries are loaded from ROM. This is not typical, as most CP/M BIOS implementations will load the CCP/BDOS/BIOS from the first sectors (or tracks) of the attached physical drive, and will require the system disk to be located in specific sectors of the hardware drive, and will rely on a specific allocation of LBA addressed sectors (or slices) for all additional drives.
The CP/M-IDE system supports a maximum of 4 active drives of nominally 8 MByte each. Since 8 MByte is the maximum size of a CP/M disk, each disk is as large as it can be. Further, each CP/M disk can support up to 2048 files. By maximising the standard CP/M-IDE disk type both in size and in number of supported files, there is no question of things being too small. The only limitation introduced is that at most 4 CP/M drives can be active at any one time, which leaves us with the maximum free TPA RAM. The choice of 4 drives was based on nominally having 1 drive for system files, 1 for application files, 1 for user data or source files, and 1 for temporary files. In practice I’ve found that working with 2 or 3 drives is the most common scenario.
As CP/M-IDE uses LBA addressing there can be as many CP/M disks stored on the IDE FAT32 (or FAT16) formatted disk as desired, and CP/M-IDE can be started with any 4 of them in any drive. Note that CP/M does not know about or care about the FAT file system. On launch CP/M-IDE is provided with an initialisation LBA for each of its 4 drives by the shell, and all future sector references to the disk (file) are calculated from these initial LBAs provided for each drive.
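The drive-relative addressing can be sketched in C. This assumes 512-byte physical sectors each holding four 128-byte CP/M records, and a hypothetical 256 records per track; the real CP/M-IDE geometry may differ, but the principle of offsetting every access from the shell-supplied base LBA is the same:

```c
/* Sketch: each CP/M drive is just a base LBA handed over by the shell,
 * and every BIOS disk access is an offset from it. Geometry values here
 * are assumptions for illustration, not the published CP/M-IDE DPB. */
#include <assert.h>
#include <stdint.h>

#define RECS_PER_TRACK  256   /* 128-byte CP/M records per track (assumed) */
#define RECS_PER_SECTOR 4     /* 128-byte records per 512-byte LBA sector  */

static uint32_t drive_base_lba[4];   /* filled in from the shell's 4 files */

uint32_t cpm_to_lba(int drive, uint16_t track, uint16_t record)
{
    uint32_t rec = (uint32_t)track * RECS_PER_TRACK + record;
    return drive_base_lba[drive] + rec / RECS_PER_SECTOR;
}
```

Because nothing in the BIOS ever refers to an absolute location, the same disk file works identically as drive A:, B:, C:, or D:, wherever it happens to sit on the parent FAT volume.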
As the FAT32 format supports over 65,000 files in the root directory, and a similar number of files in each sub-directory, collections of hundreds or even thousands of CP/M disks can be stored in any number of sub-directories on the FAT32 parent disk. Knock yourself out by storing every conceivable CP/M application on thousands of disks on a single 120 GByte drive. As the CP/M Operating System doesn’t store state (the CCP/BDOS is reloaded each time an application terminates), changing or reordering drives is as simple as typing exit, and then restarting with the new drives desired using the following shell command: cpm filefor.A filefor.B filefor.C filefor.D.
As we can store literally thousands of CP/M disks on one FAT32 parent disk, let’s think about how to create CP/M disks, and how to store information on them. There are two main methods for building CP/M disks: either from within CP/M using native tools, or from a Linux or Windows PC host with the physical FAT32 disk temporarily attached to the host. For creating and building many CP/M disks the second, host-based, method will be faster and more convenient.
Building CP/M disks from a PC host relies on the use of the CP/M Tools software utilities package. cpmtools utilities can be used to copy executable CP/M files from your host PC, where you have downloaded them, into the CP/M disk found on your FAT32 disk.
As CP/M-IDE uses a “non-retro-standard” disk definition, cpmtools lacks the required definition in the standard distribution. The disk definition for 8MByte CP/M-IDE disks is provided below. In Linux based systems this disk definition should be added to the host’s /etc/cpmtools/diskdefs file.
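For illustration only, a diskdefs entry consistent with the 8 MByte / 2048 directory entry figures described here would look something like the following. The name and the individual parameter values are a hypothetical reconstruction; use the definition distributed with CP/M-IDE rather than this sketch.

```
# hypothetical example only -- use the definition supplied with CP/M-IDE
diskdef rc2014-8mb
  seclen 512
  tracks 64
  sectrk 256
  blocksize 4096
  maxdir 2048
  skew 0
  boottrk -
  os 2.2
end
```

Note that 64 tracks of 256 sectors of 512 bytes gives exactly 8,388,608 bytes, the full 8 MByte CP/M disk size.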
On Windows PCs, as of cpmtools 2.20, creating a new disk does not extend the CP/M disk image out to the full 8,388,608 bytes. This means that, as files are added to the CP/M disk, the host PC operating system may fragment the image as it grows. This would be bad: offsets are calculated from the initial file LBA, and therefore the CP/M-IDE system has no way to recognise a fragmented CP/M disk. For safety, a template CP/M disk file has been provided, which can be stored onto the parent disk and then copied and renamed as often as desired.
Typical usage to check the status of a CP/M disk a.cpm, list the contents, and then copy a file (e.g. bbcbasic.com) from the host to the CP/M disk, is shown below.
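As an illustration, with a suitable disk definition (here given the hypothetical name rc2014-8mb) a host-side session using the standard cpmtools utilities would look something like this:

```
$ fsck.cpm -f rc2014-8mb a.cpm                          # check the disk image
$ cpmls -f rc2014-8mb a.cpm                             # list its contents
$ cpmcp -f rc2014-8mb a.cpm bbcbasic.com 0:BBCBASIC.COM # copy a file into user area 0
```

The -f option selects the disk definition added to diskdefs, and the 0: prefix on the destination is the CP/M user area.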
Building a CP/M System disk is a personal choice. There are multiple utilities and applications available, and not all of them will be relevant to your own needs. However, to get started, the contents of the RunCPM system disk can be used. An extended version can be found here.
Also, the NGS Microshell can be very useful, so it has been added to the example system disk too. There is no need to replace the default DRI CCP with Microshell. In fact, replacing it permanently would remove the special EXIT function built into the DRI CCP to return to the shell.
Of these applications above, the Hi-Tech C v3.09 suite continues to be updated and maintained by Tony Nicholson. Therefore it is useful to update the HITECHC.CPM.zip CP/M disk with the current release files.
Building CP/M Software from Source
CP/M-IDE is quite unusual in that it is built with a Unix-like shell as the system loader. From the shell the CP/M system is started, but it is also possible to use the shell to read the FAT file system and provide directory listings, to print memory and disk sector contents, and to provide status for the attached drive. Other versions of CP/M for Z180 have file system write capability included, but due to the limited ROM capacity (32kB) of the RC2014, these additional file management functions had to be omitted.
The chicken or the egg? In this case the z88dk is both the starting point for CP/M-IDE and the finishing point for developing CP/M-IDE applications.
With both of these settings adjusted, the RC2014 libraries need to be rebuilt. The sure way to do this is a full rebuild of z88dk, as both 8085 and Z80 libraries will be touched. It is done with the ./build.sh -c command from the root directory of z88dk. There are alternatives, such as deleting the libraries that will have to be changed and executing the ./build.sh command.
As well as two compilers, a macro assembler, and a large variety of useful tools, the z88dk is in essence a library of Z80 assembly language code covering all of the standard C requirements, and providing multiple options for implementing these libraries.
However, the z88dk doesn’t have C code libraries included. These are excluded because they can take too long to compile, and z88dk already takes quite a while to build as it is. However, the use of external libraries, mainly C libraries, is supported through the z88dk-lib tool, which can import a compiled library and allow the linker to find it when a final binary application is being prepared.
For CP/M-IDE we need to have a high quality, reliable, fully functional FAT file system implementation. The most commonly used implementation is the ChaN FatFS. This code has been modified to work effectively with the Z80, and is provided in my z88dk-libraries.
For CP/M-IDE I have elected to use the SDCC compiler with the IY version of the libraries. For the CP/M-IDE 8085 the only option is to use the SCCZ80 compiler as it supports 8085 (and 8080) compilation.
As noted above, there is insufficient space in the 32kB ROM to support the full set of FAT file system functions, so we have to build a special version that is “read only”. There is a configuration option that should be set to 1 to enable the RC2014 read-only build, in the file here. Then the library can be rebuilt with the following command lines.
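In the standard ChaN FatFS configuration header this kind of build is controlled by the read-only switch; the exact symbol and its location in the z88dk-libraries copy may differ, so treat this fragment as illustrative:

```c
/* ffconf.h -- standard ChaN FatFS option (illustrative of the setting
 * described above; the z88dk-libraries copy may name it differently) */
#define FF_FS_READONLY  1   /* 1: strip f_write, f_mkdir, f_unlink, etc. to save ROM */
```

Setting the read-only switch removes all of the write-path code from the compiled library, which is what makes the remaining functions fit in the 32kB ROM alongside the shell and drivers.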
The FAT file system libraries are now available for z88dk, so we can move on to compiling CP/M-IDE.
The source code available in the RC2014 Github repository for CP/M-IDE is kept up to date. There are three versions, tuned to suit the minimum hardware characteristics. There is no “auto identification” of additional hardware. This implementation of the CP/M operating system supports only IDE attached FAT formatted disks and 1 or 2 serial ports, so that is all that is necessary.
From the source directory of each version the command line identified here can be issued. The resulting .ihx file (renamed as .hex) can be compared with the provided HEX file. For interest it is worth compiling with the --list option, and studying the resultant assembly listings. This gives a good overview of the quality of code produced by the two compilers, and also the amount of space required to assemble the CP/M CCP/BDOS and BIOS components.
Now we have a functioning CP/M-IDE Intel HEX file, which can be written to EEPROM and tested.
New applications can be built using either the zcc +rc2014 -subtype=cpm or zcc +cpm for Z80 targets, or for the CP/M-IDE 8085 use zcc +cpm -clib=8085 to build applications. There are example applications to test with in the z88dk examples directory.
How does it work?
This is a description of CP/M-IDE 8085 specifically. The versions for the Z80 are quite similar, and so this can also be used as a reference for their operation. However as the RC2014 8085 support is unique in z88dk it is worth noting the specifics here.
The CP/M-IDE 8085 build is based on the rc2014 target and acia85 subtype within z88dk. The 8085 CPU starts execution at address 0x0000 from /RESET, therefore the target must write an effective Page 0 including a jump to the start of code, and interrupt and trap vectors, before the main() program for the CP/M-IDE shell can be started. z88dk uses the m4 macro preprocessor tool to expand included assembly code, and the configuration files for the acia85 subtype are found in config_8085.m4.
The overall initialisation process for the acia85 subtype is found in CRT 2 startup code for the RC2014. Each target in z88dk has multiple subtypes, and each of these subtypes has its own CRT startup code specification. These startup specifications are fully expanded and can be read most efficiently by using the --list option when compiling the system.
Before diving into the startup process it is worth considering how and where drivers for the rc2014 acia85 build are obtained. As the acia85 subtype is a hybrid across the newlib and classic libraries within z88dk, most of the drivers for acia85 are obtained from the device and driver directories within the rc2014 target. However, stdio drivers for the acia85 and basic85 subtypes are found in the classic library in the rc2014/stdio directory.
Further, using the characteristics of linker preferences, if we choose to override the library drivers with our own versions found within the CP/M-IDE BIOS, then the library versions will be ignored. And that is the case here, where we provide the ACIA, 8255, and IDE drivers. This also means that before the main() function is started we need to copy these drivers to their correct location in RAM. This is done by placing code in the code_crt_init section, as this code will be loaded and run prior to main() according to the memory model allocation.
Now we have our interrupt vectors completed, and the interrupt code placed with buffers initialised and ready to go. Our diskio and IDE drivers have been placed, and we can start our main shell user interface, parsing the command line using a shell system inspired by the example code by Stephen Brennan. Each of the commands implemented is self-explanatory, mainly invoking one of the ChaN FAT file system functions. However, the mkcpm command requires further description, as this is the transition point from z88dk into DRI CP/M.
The mkcpm function is called with up to 4 arbitrary file names, representing the 4 CP/M disks. These file names are tested and, if all the files are found to exist, the base LBA of each file will be written to a specific location in cpm_dsk0_base, and processing will be handed over to the cpm_boot() function.
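The mkcpm hand-over can be sketched as follows, with the FAT layer mocked out. The point is the all-or-nothing check: only if every requested disk file exists are the base LBAs committed. Names here are illustrative, not the actual CP/M-IDE symbols.

```c
/* Sketch of the mkcpm flow: look up each named disk file, and only if
 * ALL of them exist commit their base LBAs for the BIOS to use.
 * The FAT lookup is mocked; real code would query the FatFS layer. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint32_t cpm_dsk_base[4];       /* stands in for cpm_dsk0_base...   */

/* mock "find file on the FAT volume, return its first LBA (0 = missing)" */
static uint32_t fat_base_lba(const char *name)
{
    if (strcmp(name, "system.cpm") == 0) return 2048;    /* made-up LBAs */
    if (strcmp(name, "apps.cpm")   == 0) return 18432;
    return 0;
}

int mkcpm(int argc, const char *argv[])   /* argv: up to 4 disk file names */
{
    uint32_t lba[4] = {0};
    if (argc < 1 || argc > 4) return -1;
    for (int i = 0; i < argc; ++i) {
        lba[i] = fat_base_lba(argv[i]);
        if (lba[i] == 0) return -1;       /* any missing file aborts boot  */
    }
    memcpy(cpm_dsk_base, lba, sizeof lba); /* commit the drive base LBAs   */
    /* real code would now call cpm_boot() and never return */
    return 0;
}
```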
The _cpm_boot function is the CP/M cold boot mechanism. The CP/M cold boot will firstly toggle-out the lower 32kB of ROM to reveal a “clean” 32kB of RAM. At this point the 8085 interrupt and trap vector addresses must be written into Page 0 RAM, together with other important CP/M locations such as the I/O byte. Then control is passed to the rboot function to continue with the cold boot.
In the cboot process we should remember that the contents of the CCP/BDOS and the BIOS RAM have already been written to upper 32kB of RAM by the preamble code, so this process does not need to be repeated. This is different in the warm boot wboot process where we have to assume that the CP/M application or transient program will have overwritten the CCP and possibly also the BDOS, so we have to repeat the initialisation found in the preamble called by pboot.
As part of the cboot and wboot process, we check which CP/M disk is going to be used for our A: drive, by reading the LBA base, and then launch the CP/M CCP shell by returning to the preamble code and falling through to _main.
From here it is all CP/M, and the usual operations apply.
This covers creation of software support for the 8085 CPU within the framework of the z88dk and also with MS BASIC 4.7. Specifically, the 8085 undocumented instructions will be covered, and some usage possibilities provided.
Future work is to build a re-entrant IEEE floating point library specifically using the stack relative instructions found in the 8085 undocumented instructions.
8085 Microsoft BASIC 4.7
The Microsoft BASIC 4.7 source code is available from the NASCOM machine. Although the NASCOM was a Z80 machine, there were only minor changes to the original Microsoft BASIC 8080 code. Therefore it is an ideal source to use to build an 8085 based system.
Also, an rc2014 target ROM subtype acia85 has been provided to allow on-the-metal embedded applications to be written. The full 32kB of ROM and 32kB of RAM are then available, with the option to toggle out the ROM if needed for CP/M or similar systems.
The z88dk sccz80 C compiler is used for 8080, 8085 and Gameboy Z80 CPUs. This compiler is supported by the z88dk classic library. Over a few weeks, I reworked all of the sccz80 compiler support primitives (called l_ functions) to make them reentrant, and to optimise them for the respective CPU.
I’ve also reworked all of the z88dk string functions to support callee for the 8085 CPU. The callee calling mechanism is substantially faster than the standard calling convention. Also I’ve changed the loop mechanism for 8080 / 8085 / GBZ80 to use a faster mechanism. This consumes 5 bytes more for each function used, but reduces the loop overhead from 24 cycles per iteration to 14 cycles per iteration. Quite a substantial saving for extensively used functions like memcpy() and memset(), for example.
8085 Undocumented Instructions
Over the years since its launch, several very useful undocumented instructions designed into the 8085 have been found. These instructions are particularly useful for building stack relative code, such as required for high level languages or reentrant functions. However, perhaps because of corporate politics, these useful instructions were never announced, and thus were never widely implemented.
The z88dk-z80asm assembler provides synthetic instructions for the different variants (it has also recently become a macro assembler) to simplify programming. These synthetic instructions are usually a useful sequence of normal instructions, issued with no side effects (eg. setting flags), that can streamline combined 8085 / Z80 programming.
Discussion on the Instructions
Some things to think about (and then do).
Use the Underflow Indicator (K or UI) flag with 16 bit decrement and JP K, JP NK instructions to manage loops, like LDIR emulation, more cleanly. The 16 bit decrement underflow flag K is set on -1, not on 0, so pre-decrement the loop counter.
Use the LD DE,SP+n instruction with LD HL,(DE) to grab from and LD (DE),HL to store parameters on the stack. Can use this with a math library to make it reentrant, for example, and also relieves pressure on the small number of registers.
Use the LD DE,SP+n instruction with LD SP,HL to quickly set up the stack frame. For example LD HL,SP+n, DEC H, LD SP,HL to establish 256-n stack frame.
Use RL DE together with EX DE,HL to rotate 32 bit fields.
Use RL DE together with ADD HL,HL to shift 32 bit fields.
Use RL DE as ADD DE,DE to offset into tables and structures.
Use SUB HL,BC for 16 bit subtraction.
Remember EX (SP),HL provides another “16-bit register”, if SP+2 is the location of the return, and SP+4 is the location of first variable.
Learn how signed arithmetic can be improved using the K flag.
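The first bullet above, on K-flag loop control, can be modelled in C. This is a host-side model of the flag semantics (K set when a 16-bit decrement wraps to -1), not 8085 code; the function and flag names are illustrative.

```c
/* C model of the 8085 K (underflow) flag: set when a 16-bit decrement
 * wraps to -1 (0xFFFF), not when it reaches 0. Hence the advice to
 * pre-decrement the loop counter before an LDIR-style copy loop. */
#include <assert.h>
#include <stdint.h>

static int flag_k;                       /* models the K/UI flag        */

static uint16_t dcx(uint16_t r)          /* 16-bit decrement, sets K    */
{
    flag_k = (r == 0);                   /* 0 - 1 wraps to 0xFFFF       */
    return (uint16_t)(r - 1);
}

/* copy 'count' bytes, LDIR-style, with the loop closed by "JP NK" */
void ldir85(uint8_t *dst, const uint8_t *src, uint16_t count)
{
    uint16_t bc = dcx(count);            /* pre-decrement: K set if 0   */
    while (!flag_k) {                    /* i.e. JP NK, loop            */
        *dst++ = *src++;
        bc = dcx(bc);
    }
    (void)bc;
}
```

Note how a zero count falls straight through: the pre-decrement sets K immediately, which is exactly the clean loop-entry behaviour the bullet describes.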
Since we know that the 8085 undocumented opcodes are available in every 8085 device they can be relied upon for any 8085 system. The challenge will be to take existing 8080 programs, such as Microsoft Basic and CP/M, and implement improvements using these 8085 specific instructions.
In reworking the z88dk sccz80 l_ primitives to make them reentrant and to optimise them for the 8085 CPU, I have found the LD DE,SP+n instruction very important. Using this instruction it is possible to use the stack as effectively as static variable storage locations. The alternative available on the 8080 (and Z80) LD HL,N , ADD HL,SP takes 21 cycles, and clears the Carry flag. With the few registers available on the 8080 losing the Carry flag to provide state causes further cycle expense, spared with the 8085 alternative.
To load a single stack byte using LD DE,SP+n , LD A,(DE) is only 4 cycles slower than loading a static byte using LD A,(**). Also, loading a stack word using LD DE,SP+n , LD HL,(DE) is only 4 cycles slower than loading a static word using LD HL,(**). Given that variables can be used in-situ from the stack or pushed onto the stack from registers rather than requiring the overhead of the value being previously loaded into the static location, this small overhead translates into about 3 stack accesses for free compared to static variables.
One small design oversight in the Program Status Word of the 8085 is, however, quite annoying. The flags register contains a single bit that always reads as 0: a $FFFF pushed to AF is read back as $FF7F. This means that, unlike on the Z80, it is not possible to use a POP AF , PUSH AF pair as a temporary stack store, which rules out AF as one of the only 3 additional 16-bit registers available, making things even tighter when juggling the stack. I’d call it annoying AF.
The RL DE and SUB HL,BC instructions are very useful to build 16-bit multiply and divide routines effectively. They have contributed to useful optimisations of these primitives. The saving in bytes over equivalent 8080 implementations has allowed for partial loop unrolling, which also speeds up the routines by reducing loop overhead. Initially, I was concerned that the SUB HL,BC function didn’t include the Carry flag. But in hindsight it is not possible to effectively carry into the registers, and using the 8 bit SUB A,C , SBC A,B instructions via the A register is the way to manage long arithmetic.
The next challenge was to build a CP/M-IDE version for the 8085 CPU. The ingredients are ACIA serial drivers adapted for 8085, IDE and diskio drivers for 8085, and the ChaN FatFs library compiled for 8085, plus a 8085 adapted BIOS.
When looking at the IDE drivers written previously for Z80 it was obvious that I’d gone out of my way to use Z80 instructions, which were actually slower than using 8080 instructions. So, I took the opportunity to rewrite an integrated solution for both Z80 and 8080/8085, for future maintenance.
The new CP/M-IDE 8085 code is very similar to the existing ACIA and SIO serial Z80 code, by design. I’ve tried to minimise the differences wherever possible. The remaining differences are mainly in the BIOS code, and relate to initialisation of the 8085 interrupts and the different CRT code used between Z80 and 8085 systems.
The 8080 CPU stands at the root of microprocessor development over the past 50 years. Although it was the first commercially successful microprocessor, it was quickly followed by two different processors with different bus characteristics. This is a record of interfacing one of the descendants, the Intel 8085, with peripherals and modules designed for use with the other descendant, the Zilog Z80.
All three of these devices, the 8080, the 8085, and the Z80 were implemented with 40-pin DIP packaging, which limited the number of pins they could use for bus signalling. The 8080, requiring 3 power supply voltages, was particularly limited as it didn’t multiplex the address or data lines, but rather needed to share the data lines for status information. More about the 8080 can be read at Wikipedia, or CPU Shack. I will not add to it here.
Derived from the 8080 and implemented by the same lead designers and architects, the Zilog Z80 uses four lines to signal general timing on the bus. In addition, an M1 line is used to signal that an interrupt is being processed and that an interrupting peripheral needs to provide an address (or vector) to which the CPU should jump in IM2 mode.
The Z80 rationalised the power requirements down to +5V and GND, which allowed a simpler and more explicit set of bus controls to be provided. As the Z80 implemented two address spaces, one for memory and one for Input/Output ports, it was useful to have two separate lines signalling memory access and Input/Output access. In this way a peripheral only needed to handle one of the two signals, depending on whether it was memory or a I/O address space peripheral device.
In addition the Z80 has two lines providing signalling for Read or Write. The timing was designed so that the data on the 8 data lines was valid at the point when the respective signal was deasserted. The Z80 would hold data it wanted to write or output until the write signal was deasserted, and it would latch and read the bus when reading or inputting data when the read signal was deasserted.
With only minor differences, the Memory and Input/Output lines are operated with similar timing, and this is aligned mostly with the Read and Write signals. This enabled system designers to build very simple bus interfacing for their Z80 based systems.
There are many additional features and alternatives here, around Interrupt Mode 2, timing for sampling the Ready pin which causes the Z80 to pause, and other minor timing issues. However, they are not relevant for most purposes.
Most system designers used these four signals to create memory write, memory read, I/O write and I/O read signals. Then one signal line, together with a chip-select generated by the address lines (directly in simple systems, or through logic in more complex systems) was enough to operate each component of the system.
For the 8085, the Intel architects took the bus interface in another direction. They integrated several components from the support chips for the 8080 into the silicon die, and produced new features which made the 8085 much more useful as a micro-controller than the Z80. For the bus, the major change was to multiplex the data lines with the low address lines. This step allowed them to reuse the 8 saved lines on the 40-pin DIP for other purposes.
Multiplexing the address and data lines meant that they had to add an external address latch, to capture the lower address values, before either writing data or reading data from the bus. The normal read and write lines are present and they behave in a similar manner to the Z80.
In a significantly different solution to the Z80, the 8085 uses only one line, IO/M, to differentiate Input/Output and Memory addresses, using the sense of the line (high or low) to indicate whether the I/O address space or the memory address space is being addressed. The timing of this IO/M line is also substantially different from the Z80: rather than becoming valid when the bus address is valid, it is valid for the entire instruction cycle, from the start of the instruction through to its completion.
This is the first significant divergence from the Z80 system bus, and it causes issues with peripherals that require an enabling signal to be provided after the address lines are stable. In most designs a decoder was required to produce signals for attached peripherals.
Generating Z80 /IORQ and /MREQ from 8085 signals
As many Z80 standard peripherals and also Motorola peripherals need to have the /IORQ line valid when the address is stable, we need to generate a Z80 compatible /IORQ (and /MREQ) signal. There are textbook “decoder” circuits available to produce the four system signals /IOR, /IOW, /MEMR and /MEMW from the 8085 IO/M signal and /RD, /WR, but there is no standard solution for using the 8085 on the Z80 bus. This is the problem we are going to solve.
From the Z80 datasheet, the /IORQ and /MREQ signals are almost exactly tied to the timing of the /RD and /WR signals. Therefore we can use /RD and /WR with some combinational logic to produce mostly correct timing for /IORQ and /MREQ. We need the result to be active when either /RD or /WR is low (active). If both are high, then the result should also be high (inactive). /RD and /WR are never both active, but for convenience we can let the result be active if both were. In positive logic this would be generated by an OR gate; with inverted (active low) logic it is implemented as an AND gate.
Intermediate Truth Table

/RD | /WR | Result = /RD · /WR
 0  |  0  |  0   (invalid state – /RD and /WR are never both active)
 0  |  1  |  0
 1  |  0  |  0
 1  |  1  |  1
To generate the /MREQ signal we are looking for the time when IO/M is low whilst either /RD or /WR is low. In negative logic this is an OR gate, where the output remains high unless both IO/M and the /RD-or-/WR result are low. So to generate /MREQ we need to provide ( /RD AND /WR ) OR IO/M.

 (/RD AND /WR)  IO/M | /MREQ
       0         0   |   0    (only when both are active)
       0         1   |   1
       1         0   |   1
       1         1   |   1
To generate the /IORQ signal we can recognise that it is simply the same /RD /WR logic but with the IO/M line inverted (NOT). So we can generate /IORQ by ( /RD AND /WR ) OR NOT IO/M.
From this solution we can simplify the expression into either NAND or NOR gates. Taking NAND gates as the basis, the solution simplifies to four gates that fit into a single 7400 device.
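The gate equations above can be checked exhaustively. This is a sketch in C (not the PCB netlist) that models both the direct boolean form and the four-NAND-gate mapping, using the active-low convention where 0 means asserted:

```c
/* Model of the glue logic described above. All signals are active low,
 * so 0 means asserted. */

/* Z80-style /MREQ = ( /RD AND /WR ) OR IO/M */
static int glue_mreq(int n_rd, int n_wr, int io_m)
{
    return (n_rd & n_wr) | io_m;        /* low only for memory cycles */
}

/* Z80-style /IORQ = ( /RD AND /WR ) OR NOT IO/M */
static int glue_iorq(int n_rd, int n_wr, int io_m)
{
    return (n_rd & n_wr) | (io_m ^ 1);  /* low only for I/O cycles */
}

/* The same logic mapped onto the four gates of a single 7400:
 * gate 1: t     = NAND(/RD, /WR)
 * gate 2: /IORQ = NAND(t, IO/M)
 * gate 3: u     = NAND(IO/M, IO/M)    -- used as an inverter
 * gate 4: /MREQ = NAND(t, u)                                     */
static int nand2(int a, int b) { return !(a & b); }

static int glue_iorq_nand(int n_rd, int n_wr, int io_m)
{
    return nand2(nand2(n_rd, n_wr), io_m);
}

static int glue_mreq_nand(int n_rd, int n_wr, int io_m)
{
    return nand2(nand2(n_rd, n_wr), nand2(io_m, io_m));
}
```

Iterating over all eight input combinations confirms that the NAND mapping matches the boolean expressions exactly.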
Other Bus Timing Issues
Several Z80 peripherals use the /WAIT signal to cause the Z80 to wait until they are ready to read data from the bus, or to write data onto the bus. The Z80 inserts one wait state whenever it uses I/O instructions, to give slow peripherals sufficient time to signal they are not ready to proceed. The 8085 does not add this automatic wait state, so there may not be sufficient time for a peripheral to signal the CPU to wait. There are standard circuits available to add one wait state into 8085 bus cycles.
Motorola bus peripherals use an E or Enable clock to signal that they are being addressed. For the Z80 bus, this is typically implemented by inverting the /IORQ signal. However, for the 8085 using the method above, there may be insufficient time between the E (inverted /IORQ) and stabilisation of the address.
Z80 peripherals capable of Interrupt Mode 2 use the M1 signal to determine when they should place their interrupt address (vector) on the bus. The 8085 does not generate this signal, but since the 8085 does not support IM2 mode anyway this point is probably moot.
8085 CPU Module for RC2014
8085 CPU Module PCBs are available on Tindie. Combine with a Memory Module PCB to save postage.
The RC2014 Bus and Modules have been available now for some time, and the Z80 nature of the system bus provides for simplicity in the system design. There is no buffering or conversion by the CPU Module, and individual peripheral Modules are left to convert bus (or Z80) signals to suit their own requirements.
In researching the requirements for an 8085 CPU Module to work with the RC2014 Z80 bus and standard peripheral Modules, I found the Glitchworks 8085 SBC and also Alan Cox’s 8085 designs. My initial design replicated the bus interface signalling of these two designs.
After building the first version of the 8085 CPU Module I found that the Motorola 68B50 ACIA based RC2014 Serial Module didn’t work properly. This is because on the module the required E clock is derived from Z80 /IORQ timing, and the simple method of inverting IO/M as /IORQ doesn’t provide the timing needed. The 68B50 requires the bus address to be stable before E (or /IORQ inverted) is asserted.
A second version of the 8085 CPU Module was implemented, using the above method for generating the /IORQ and /MREQ signals.
The current hardware doesn’t supply a wait state to the CPU, so the hardware interface to the APU Module designed for RC2014 doesn’t work. The 8085 CPU allows only 25ns to 30ns (depending on the manufacturer’s specification) for a peripheral to assert not READY (or /WAIT). The Am9511A takes 83ns to assert /WAIT.
The retro-challenge is to extend the current 8085 CPU Module design to include a wait state generator for IO instructions to support the APU Module and the UX Module.
Retrochallenge – 1st Update – 2nd October
Getting to Am9511A APU support for the RC2014-8085 machine means firstly getting the fundamental 8085 platform working.
The RC2014 is supported by the “newlib” of Z88DK, which is meant for Z80, Z180, and Z80N (Spectrum Next) processors, while the 8085 is supported by the “classic” library. So this is the first time that a newlib machine is using classic lib libraries. Confusing? Yes, I find it so.
Anyway, the trick is just getting the right pieces to link together. Having a ZIF-socketed ROM and a TL866CS programmer helps with fast programming cycles.
Retrochallenge – 2nd Update – 3rd October
Now that the z88dk RC2014-8085 ROM build using the ACIA Serial Module is working (along with the RAM build supported by BASIC), I’ve spent the past days tidying the ACIA builds around my various repositories, to keep everything consistent. So now my BASIC builds for both 8085 and Z80 are aligned with RC2014 HexLoadr BASIC, CP/M-IDE ACIA, and also the z88dk ACIA newlib device code. I also took the time to clean up some of the SIO device code.
@suborb is working on the z88dk classic library crt0 and compiler intrinsics, as they’ve been stuck in both classic and newlib and are a bit disorganised. Hopefully the result will be one set that can be used for both compilers (zsdcc, and sccz80) and both libraries, across multiple machines (8080, 8085, GBZ80, Z80, Z180, Z80N, etc) which will make maintenance much easier.
Waiting now for China to come back from National Day holiday, so I can get started with new hardware.
8085 Wait State Generator
Retrochallenge – 3rd Update – 8th October
As noted above, the window of opportunity for an 8085 bus peripheral to signal not READY is very short. In fact it is no more than 30ns from the fall of the ALE signal, and this is 30ns before the /IORQ signal is even enabled.
Timing information from the 8085 datasheet shows tLRY as maximum 30ns, and tLC as minimum of 60ns.
To be able to connect devices designed for the Z80 bus to the 8085 CPU we will need to implement a wait state generator. In the best case this will only affect I/O cycles, and will not slow down normal memory read and write cycles.
Designing the 8085 /IORQ Wait State Generator
As the need to generate a wait state was well known at the time of release of the 8085, several sources include the information required for the design of a basic solution. It is left to the reader to determine how to use the created wait state though.
For our purposes we need a wait state generated only for peripheral devices, accessed using the I/O instructions. Therefore we can modify the above circuit to only generate a wait state when the I/O address space is active, or when the external Z80 bus /WAIT signal is active. The below circuit produces a /READY signal that provides one wait state whenever the I/O address space is active, and can continue to produce wait states until the /WAIT signal is de-asserted.
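The intended behaviour can be summarised in a small behavioural sketch (the logic-level behaviour only, not the flip-flop circuit itself): one wait state is always forced for I/O cycles, and an asserted external /WAIT line stretches the wait for as long as the peripheral needs.

```c
/* Behavioural sketch of the wait state generator described above.
 * Returns the number of wait states a bus cycle will see. */
static int wait_states(int is_io_cycle, int peripheral_wait_clocks)
{
    if (!is_io_cycle)
        return 0;                       /* memory cycles run at full speed */

    int n = 1;                          /* one wait state always inserted  */
    if (peripheral_wait_clocks > n)
        n = peripheral_wait_clocks;     /* /WAIT held low extends the wait */
    return n;
}
```

This matches the Z80's own behaviour of one automatic I/O wait state, while leaving memory cycles untouched.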
As the static RAM / EEPROM memory devices we are using are not sensitive to the timing of the /MREQ signal, the NAND gates assigned to generate a correct Z80 /MREQ have been recovered and reused in the implementation of the wait state generator. Therefore the revision required only one additional device on the PCB. Based on this design a revised 8085 CPU Module was created, and ordered. It is due to arrive around October 18th, which won’t leave much time to finish before the end of the RetroChallenge. It will be a rush, as usual.
The new 8085 CPU Module PCB arrived, so wasting no time I’ve built one up to test. And it works!
It is interesting to look at the signals actually appearing on the RC2014 Bus during the operation of the APU. Here we have a floating point read from the APU, 4 bytes, where the wait state generator produces sufficient delay (1 wait state) to allow the APU to generate its own /WAIT signal for the last two bytes.
The floating point write cycle is similar but the duration of the /WAIT signal from the APU is longer, and the APU needs to assert it on every byte written. Note that tRYH is 0ns, so there is no need to hold the /READY signal beyond the clock rise point.
To support the Am9511A APU Module the /WAIT signal has to be patched to the USER1 Pin (if using the standard RC2014 backplane), which allows the Am9511A to extend the single wait state generated by the 8085 CPU Module for as long as the APU needs.
I’ve prepared a specific version of MS Basic 4.7 for the 8085 CPU Module when used with the Am9511A APU Module. Initial testing is working. It is looking very good to achieve the RetroChallenge goals. Please read further at the 8085 Software post for more information.
With the Wait State generator functioning, it is now possible to use the UX Module for a VGA screen and PS/2 Keyboard.
Retrochallenge – 6th (Final) Update – 26th October
Rework of z88dk classic 8080/8085/gbz80 library l_ functions.
When working with the 8085, the biggest issue is the continual pressure on the few CPU registers. Alongside the 8-bit accumulator a and the 16-bit accumulator hl, we have only two additional register pairs that can be used, the bc and de registers. This gives the system programmer few options but to use static memory locations to store intermediate values, which leads to non-reentrant code.
Having non-reentrant code is normally not a problem, but it does lead to issues when multiple threads (or tasks) are trying to use the CPU at the same time, for example when a multi-tasking operating system is to be supported. So it is useful to try to build reentrant functions that use the stack for storage of intermediate values, rather than static memory locations.
The designers of the 8085 had this in mind when they designed the additional functions found in the 8085 silicon. The “new” instructions make it very efficient to build stack relative functions (compared to the 8080), and this relieves some pressure on the small number of registers.
However, there was one oversight made by the designers: in contrast to the Z80, the 8085 af register pair cannot be used to push and pop arbitrary words on the stack, because one flag bit always reads as 0. This reduces the number of usable 16-bit registers from a possible 4 to 3, a subtle but annoying limitation of the 8085.
As background, some of these functions originate from the 1980s and 1990s in the Amsterdam Compiler Kit, and haven’t been updated or improved for the past 20 years. They weren’t broken. But they were in need of some attention.
So this update is the final one in the October 2021 RetroChallenge. All the new functions are checked in and are now part of the z88dk.
My club uses pneumatic systems to turn the ISSF Targets, which are controlled by a timing system. One of the members asked me to help build a phone interface for the systems.
The systems are used for many courses of fire, and there are quite a few options to manage. On the front panel there is a RESET, which is tied to the CPU RESET, and a FACE button which returns the targets to face the shooter for scoring.
It turns out that the retired systems are based on an 8085 CPU, in the classic minimum configuration with an 8155 providing 256 Bytes of RAM, and input and output ports. There is a 2732 UV PROM holding the program.
So, how do we get these devices online? My thoughts are to add a serial port so that the system can be controlled remotely, then to use an additional WiFi enabled device which can present a web interface to the Range Officer to control proceedings.
First step is to see what is going on under the hood here. So using the TL-866 the binary code on the ROM was read, and then using z88dk-dis the existing code could be interpreted.
It was interesting to see a very simple method of operation in the existing ROM. The system can only change course of fire if it is RESET, when it reads the position of the switches, and then halts awaiting an interrupt to trigger the course of fire. When the string is finished it will return to repeat the same course of fire.
Timing was based on a delay circuit providing 500ms of delay per unit. Perhaps it is not 100% accurate, but good enough for the application.
I believe that I found a bug that has been latent in the device for the last 40 years. It seems that an address byte was reversed, which would cause a jump into empty addresses. Not sure why no one realised that previously.
Building Serial Interface
I’m planning to build a simple serial interface which will read a character, and then change the course of fire based on that character. Initialising the course of fire can be then done by the web interface, by triggering an interrupt, or by using the wired front panel interface.
After asking the experts I learned that the SID/SOD pins on the 8085 can be used as a bit-bang serial port. In fact that is the standard way of building a serial port for early systems. The code for building serial transmission is included in the early application notes.
The serial code works perfectly at 9600 baud on this 3MHz system. Since only one character will be received and a few transmitted on boot, there are no performance issues to consider.
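The timing arithmetic shows why 9600 baud is comfortable on this system. The 3.072 MHz clock used here is an assumption (a common 8085 configuration), not a measured value from this board:

```c
/* Back-of-envelope timing for the SID/SOD bit-banged serial port. */
static long cycles_per_bit(long cpu_hz, long baud)
{
    return cpu_hz / baud;    /* CPU clocks available per serial bit */
}

static long cycles_per_char(long cpu_hz, long baud)
{
    /* start bit + 8 data bits + stop bit = 10 bit times per character */
    return 10 * cycles_per_bit(cpu_hz, baud);
}
```

At 3.072 MHz and 9600 baud that is 320 clocks per bit, and about 3200 per character, which is plenty of margin for a counted software delay loop.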
I’ve written the upgrade code to replicate the front panel selection process, and to allow the system to behave exactly as before when no serial input is available. When a serial command is available, which is triggered by activity on the RST6.5 line, then the system will set a different course of fire than is shown on the front panel. The string can be triggered either by the front panel, or by the interrupt related to the serial interface.
ESP-32 Web Interface
Following a bit of a search the Adafruit HUZZAH32 Breakout presented itself as the best solution to web enable the Target Turner. It can be powered by 5V, and the RX is protected against 5V input by a diode.
The physical interface is going to be a FTDI Basic style connector. Using this connector will allow me to test the 8085 first, and then build the web interface and test it separately from the Target Turner. The last step will be to integrate the two devices into a system.
Using the simple serial character interface, it should be possible to present an active web page to the Range Officer.
There are many tutorials on how to build active web pages using the ESP-32 and WebSockets.
Over the past few years I’ve implemented a number of interfaces for Z80 peripherals based on the principle of the interrupt driven ring buffer. Each implementation of a ring exhibits its own peculiarities, based on the specific hardware. But essentially I have but one ring to bring them all and in the darkness bind them.
This is some background on how these interfaces work, why they’re probably fairly optimal at what they do, and things to consider if extending these to other platforms and devices.
The ring buffer is a mechanism which allows a producer and a consumer of information to each work at a timing to suit their needs, without coordinating with one another.
Wikipedia defines a circular buffer, or ring buffer, as a data structure that uses a single fixed-size buffer as if it were connected end-to-end. The most useful property of the ring buffer is that it does not need to have its elements relocated as they are added or consumed. It is best suited to be a FIFO buffer.
More recently, I’ve been working with Z80 platforms and I’ve taken that experience into building interrupt driven ring buffer mechanisms for peripherals on the Z80 bus. These include three rings for three different USART implementations, and a fourth ring for an Am9511A APU.
But firstly, how does the ring buffer work? For the details, the Wikipedia entry on circular buffers is the best bet. But quickly, the information (usually a byte, but not necessarily) is pushed into the buffer by the producer, and it is removed by the consumer.
The producer maintains a pointer to where it is inserting the data. The consumer maintains a pointer to where it is removing the data. Both producer and consumer have access to a count of how many items there are in the buffer and, critically, the act of counting entries present in the buffer and adding or removing data must be synchronised or atomic.
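The structure just described can be sketched in portable C (this is a generic illustration, not the optimised Z80 code discussed below):

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal generic ring buffer. The producer writes at 'in', the
 * consumer reads at 'out', and 'count' is the shared state that must
 * be updated atomically (on the Z80, inside the interrupt or with
 * interrupts disabled). */
#define RING_SIZE 16

typedef struct {
    uint8_t data[RING_SIZE];
    size_t  in;      /* producer index            */
    size_t  out;     /* consumer index            */
    size_t  count;   /* items currently buffered  */
} ring_t;

static int ring_put(ring_t *r, uint8_t b)
{
    if (r->count == RING_SIZE)
        return 0;                        /* full: byte is abandoned */
    r->data[r->in] = b;
    r->in = (r->in + 1) % RING_SIZE;     /* wrap around the end */
    r->count++;                          /* must be atomic with the store */
    return 1;
}

static int ring_get(ring_t *r, uint8_t *b)
{
    if (r->count == 0)
        return 0;                        /* empty: nothing to consume */
    *b = r->data[r->out];
    r->out = (r->out + 1) % RING_SIZE;
    r->count--;                          /* must be atomic with the load */
    return 1;
}
```

Note the modulo wrap and the full/empty checks; the 8-bit optimisations below exist precisely to remove the cost of these operations.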
8 Bit Optimisation
The AVR example code is written in C and is not optimised for the Z80 platform. By using some platform specific design decisions it is possible to substantially optimise the operation of a general ring buffer, which is important as the Z80 is fairly slow.
The first optimisation is to assume that the buffer is exactly one page or 256 bytes. The advantage we have there is that addressing in Z80 is 16 bits and if we’re only using the lowest 8 bits of addressing to address 256 bytes, then we simply need to align the buffer onto a single 256 byte page and then increment through the lowest byte of the buffer address to manage the pointer access.
If 256 bytes is too many to allocate to the buffer, then we can use a power-of-2 buffer size and align the buffer in memory so that it falls on the boundary of the buffer size; the calculation for the pointers then becomes simple masking (rather than a decision and jump). Simple masking ensures that no jumps are taken, which means that the code flow or delay is constant no matter which place in the buffer is being written or read.
Note that although the number of bytes allocated to the buffer is 256, the buffer cannot be filled completely. A completely full 256 byte buffer cannot be distinguished from an empty buffer. This does not apply where the buffer is smaller than the full page.
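Both optimisations can be sketched in C. With a full 256-byte buffer a uint8_t index wraps for free; for a smaller power-of-two buffer the wrap is a constant mask, so there is no compare-and-branch either way. The alignment itself has to be arranged by the linker (z88dk ALIGN), which C alone cannot express:

```c
#include <stdint.h>

#define BUF_SIZE 64                 /* power of two, up to 256 */
#define BUF_MASK (BUF_SIZE - 1)

static uint8_t buf[BUF_SIZE];       /* must be aligned on BUF_SIZE */
static uint8_t in_idx, out_idx, count;

static void put_byte(uint8_t b)
{
    buf[in_idx] = b;
    in_idx = (uint8_t)((in_idx + 1) & BUF_MASK);   /* branch-free wrap */
    count++;
}

static uint8_t get_byte(void)
{
    uint8_t b = buf[out_idx];
    out_idx = (uint8_t)((out_idx + 1) & BUF_MASK); /* branch-free wrap */
    count--;
    return b;
}
```

With BUF_SIZE set to 256 the mask is 0xFF and the AND disappears entirely, leaving a bare 8-bit increment on the Z80.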
With these two optimisations in place, we can now look at three implementations of USART interfaces for the Z80 platform. These are the MC6850 ACIA, the Zilog SIO/2, and the Z180 ASCI interface. There is also the Am9511A interface, which is a little special as it has multiple independent ring buffers, and has multi-byte insertion.
To start the discussion, let us look at the ACIA implementation for the RC2014 CP/M-IDE bios. I have chosen this file because all of the functions are contained in one file, which provides an easier overview. The functions are identical to those found in the z88dk RC2014 ACIA device directory.
Using the ALIGN keyword of z88dk, the ring buffer itself is placed on a page boundary, in the case of the receive buffer of 256 bytes, and on the buffer size boundary, in the case of the transmit buffer of 2^n bytes.
Note that where the buffer is smaller than a full page, all of the bytes in the buffer could be used because the buffer counter won’t overflow, but I haven’t made that additional optimisation in my code. So no matter how many bytes are allocated to a buffer, one byte always remains unused.
Once the buffer is located, the process of producing and consuming data is left to either put or get functions which write to, or read from the buffer as and when they choose to. There is no compulsion for the main program flow to write or read at a particular time, and therefore the flow of code is never delayed. This is optimum from the point of view of minimising delay and maximising compute time. Additional functions such as flush, peek, and poll are also provided to simplify program flow, and init to set up the peripheral and initialise the buffers on first use.
With the buffer available then the interrupt function can do its work. Once an interrupt from the peripheral is signalled, the interrupt code checks to see whether a byte has been received. If not then the interrupt (in the case of the ACIA and ASCI) must have been triggered by the transmit hardware becoming available.
If in fact a byte has been received by the peripheral then the interrupt code recovers the byte, and checks there is room in the buffer to store it. If not, then the byte is simply abandoned. If there is space, then the byte is stored, and the buffer count is incremented. It is critical that these two items happen atomically, which in the case of an interrupt is the natural situation.
If the transmission hardware has signalled that it is free, then the buffer is checked for an available byte to transmit. If none is found then the transmit interrupt is disabled. Otherwise the byte is retrieved from the buffer and written to the transmit hardware while the buffer count is decremented.
If the transmit buffer count reaches zero when the current byte is transmitted, then the interrupt must disable further transmit interrupts to prevent the interrupt being called unnecessarily (i.e. with an empty buffer).
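The interrupt flow described over the last few paragraphs can be sketched as a self-contained simulation. The hardware registers are stubbed out with plain variables, and names like RX_FULL are illustrative rather than the real chip's register map:

```c
#include <stdint.h>

#define RX_FULL  0x01
#define BUF_SIZE 256

static uint8_t rx_buf[BUF_SIZE];
static uint8_t rx_in, rx_count;         /* consumer index omitted here */

static uint8_t tx_buf[BUF_SIZE];
static uint8_t tx_out, tx_count;

static int     tx_irq_enabled = 1;
static uint8_t last_tx_byte;            /* stands in for the data register */

static uint8_t hw_status, hw_rx_data;   /* stubbed hardware registers */

static void acia_isr(void)
{
    if (hw_status & RX_FULL) {            /* a byte has been received */
        if (rx_count < BUF_SIZE - 1) {    /* room in the buffer? */
            rx_buf[rx_in++] = hw_rx_data; /* store and count, atomically */
            rx_count++;
        }                                 /* else the byte is abandoned */
    } else {                              /* must be transmit-ready */
        if (tx_count > 0) {
            last_tx_byte = tx_buf[tx_out++];
            tx_count--;                   /* fetch and count, atomically */
        } else {
            tx_irq_enabled = 0;           /* empty: quiesce the interrupt */
        }
    }
}
```

Inside the interrupt further interrupts are disabled, so the store-and-count and fetch-and-count pairs are naturally atomic.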
Both the SIO and ASCI have multi-byte hardware FIFO buffers available. This is to prevent over-run of the hardware should the CPU be unable to service the receive interrupt in sufficient time. This could happen if the CPU is left with its general interrupt disabled for some time.
One additional feature worth discussing is the presence of a transmit cut-through, which minimises delay when writing the “first byte”. Because the Z80 processor is relatively slow compared to a serial interface, it is common for the transmit interface to be idle when the first byte of a sequence of bytes is written. In this situation writing the byte into the transmit buffer, and then signalling a pseudo interrupt (by calling the interrupt routine) would be very costly. In the case of the first byte it is much more effective simply to cut-through and write directly to the hardware.
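The cut-through decision itself is small, and can be sketched with the same stubbed-hardware approach (names are illustrative, not the real driver):

```c
#include <stdint.h>

#define TX_SIZE 256

static uint8_t tx_buf[TX_SIZE];
static uint8_t tx_in, tx_count;

static int     hw_tx_idle = 1;   /* stub: transmitter-empty flag */
static uint8_t hw_data_reg;      /* stub: hardware data register */
static int     tx_irq_enabled;

static void tx_put(uint8_t b)
{
    if (tx_count == 0 && hw_tx_idle) {
        hw_data_reg = b;          /* cut-through: straight to hardware,  */
        hw_tx_idle = 0;           /* no buffering, no pseudo interrupt   */
        return;
    }
    tx_buf[tx_in++] = b;          /* otherwise queue it (store + count   */
    tx_count++;                   /* must be atomic on real hardware)    */
    tx_irq_enabled = 1;           /* the transmit ISR will drain it      */
}
```

The first byte of a burst takes the fast path; subsequent bytes are queued and drained by the transmit interrupt as the hardware becomes free.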
For the ring buffer to function effectively, the atomicity of specific operations must be guaranteed. During an interrupt the Z80 typically does not permit further interrupts, so within the interrupt we have a degree of atomicity. The only exception to this rule is the Z80 Non Maskable Interrupt (NMI), but since this interrupt is not compatible with CP/M it has never been used widely and is therefore not a real issue.
Across the three implementations there are three different Z80 interrupt modes in play. The Motorola ACIA is not a Zilog Z80 peripheral, so it can only signal a normal interrupt, and can therefore (without some dirty tricks) only work in Interrupt Mode 1. For the RC2014 implementation it is attached to INT or RST38 and therefore when an interrupt is triggered it is up to the interrupt routine to determine why an interrupt has been raised. This leads to a fairly long and slow interrupt code.
The Z180 ASCI has two ports and is attached to the Z180 internal interrupt structure, which works in much the same way as Z80 Interrupt Mode 2, although it is actually independent of the Z80 interrupt mode. Each Z180 internal interrupt is triggered separately; however, the ASCI interrupt still cannot discern between a receive and a transmit event, so the interrupt handling is essentially similar to that of the ACIA.
The Zilog SIO/2 is capable of being attached to the Z80 in Interrupt Mode 2. This means that the SIO is capable of being configured to load the Z80 address lines during an interrupt with a specific vector for each interrupt cause. The interrupts for transmit empty, received byte, transmit error, and receive error are all signalled separately via an IM2 Interrupt Vector Table. This leads to concise and fast interrupts, specific to the cause at hand. The SIO/2 is the most efficient of all the interfaces described here.
For interest, the Am9511A interface uses two buffers, one for the one byte commands, and one for the two byte operand pointers. The command buffer is loaded with actions that the APU needs to perform, including some special (non hardware) commands to support loading and unloading operands from the APU FILO.
A second Am9511A interface also uses two buffers, one for one byte commands, and one for either two or four byte operands. This mechanism is not as nice as storing pointers as in the above driver, but is required for situations where the Z180 is operating with paged memory.
I’ve since revised the above solution again to use three byte operand (far) pointers, as that makes for a much simpler user experience. The operands don’t have to be unloaded by the user. They simply appear auto-magically…