The ultimate targets of this project are embedded systems, mainly medical devices, and the desktops used to develop them. Nothing else matters.
NIH (Not Invented Here) Syndrome is a death knell
So many projects took an "I'll write my own" approach to software development that it is tragic. During the days of DOS and OS/2 we had little choice. Some of the concepts from Qt and CopperSpice will be given new definitions in BasisDoctrina; garbage collection is a main one. Signals & Slots will go away, replaced by both immediate mode, along the lines of NanoGUI-SDL, and publish-subscribe message queuing. We will be using SDL3 to provide the underlying I/O, graphics, and other rock-solid functionality. SDL3 has a laundry list of functionality in its API, and many games and embedded devices use the library directly. It is the perfect foundation for a widget-based class library, allowing developers to focus on the things that really matter from an embedded system and/or desktop application point of view.
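To make the immediate-mode idea concrete, here is a minimal sketch of the kind of SDL3-backed frame loop a task might run. Only the SDL3 calls come from the library itself; the window title and the notion of drawing widgets directly inside the loop are illustrative assumptions, not a settled BasisDoctrina API.

    #include <SDL3/SDL.h>

    int main(int, char **)
    {
        if (!SDL_Init(SDL_INIT_VIDEO)) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }

        // placeholder window title and size, not part of any real API
        SDL_Window   *window   = SDL_CreateWindow("bd demo", 640, 480, 0);
        SDL_Renderer *renderer = SDL_CreateRenderer(window, nullptr);

        bool running = true;
        while (running) {
            SDL_Event event;
            while (SDL_PollEvent(&event)) {          // pump input from SDL3
                if (event.type == SDL_EVENT_QUIT) {
                    running = false;
                }
            }

            SDL_SetRenderDrawColor(renderer, 32, 32, 32, 255);
            SDL_RenderClear(renderer);
            // immediate-mode widgets would be drawn right here, every frame,
            // instead of reacting to signals and slots
            SDL_RenderPresent(renderer);
        }

        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }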
Agile will not be used
Contrary to popular belief, Agile is not Software Engineering. Read this book if you want to understand why. Automated testing generally tests nothing. It certainly doesn't test on actual hardware with a user looking at it to see what is wrong.
Phones will not be supported
The Qt project completely lost its way pursuing the phone market. QML is still a train wreck despite their having started on it way back in Qt 4.x. You cannot build a reliable product for a regulated or safety-critical industry using a language that does not have strong typing. If the same variable can hold 1234.56 the first time through a calculation and "Mary had a little lamb" the next time, a life will be at risk. Do you want it to be yours?
The Big Picture
Long ago I wrote a book on Service Oriented Architecture that covered DEC ACMS. Given DEC's lack of creativity in naming, ACMS stands for Application Control Management System. Basically, you created an ACMS application server to run and manage everything. Your actual process got split into restartable units of work, and each unit was programmed into a Task Server. The application server would start and stop task servers so the configured minimum was always present, and it would cap the count at the configured maximum. In short, it provided throttle control and filled a Benevolent Overlord role. We won't go quite that far, but that is the conceptual design. The world of Docker and other container systems is trying to re-invent this wheel.
Why?
The single Main Event Loop that must exist, and the fact that any thread you create will have affinity for the CPU core of its parent thread. Effectively, Qt, CopperSpice, insert-your-favorite-framework-here, limits your program to using one core of your 8/16/20+ core machine. Container packages like Docker theoretically allow you to assign a core to a specific container, letting you use all of your cores.
One of the reasons so many non-graphics applications are using GPUs is that, when the work is properly split, you can get around this single-thread design by having hundreds of GPU cores do the calculations or whatever else.
So, we are going to try for an optional Benevolent Overlord named bd_application. It will provide a publish-subscribe message queue (possibly more than one) and check the heartbeat of tasks. It will also launch tasks, spreading them out across available cores. The way you get around threading problems with graphics backends is to have real processes, not threads.
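A rough sketch of what that launching side could look like on Linux, assuming fork/exec plus sched_setaffinity for core placement. The bd_task_demo binary name and the --managed flag are hypothetical placeholders, and a real overlord would also track heartbeats and restart dead children.

    // Linux/glibc-specific sketch: launch each task as a real process pinned to a core
    #include <sched.h>       // sched_setaffinity, CPU_ZERO, CPU_SET
    #include <sys/types.h>   // pid_t
    #include <unistd.h>      // fork, execl, _exit, sysconf
    #include <vector>

    pid_t launch_task(const char *task_binary, int core)
    {
        pid_t pid = fork();
        if (pid == 0) {                                  // child process
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            sched_setaffinity(0, sizeof(set), &set);     // pin this process to one core
            execl(task_binary, task_binary, "--managed", (char *)nullptr);
            _exit(127);                                  // exec failed
        }
        return pid;                                      // parent keeps the pid for heartbeat checks
    }

    int main()
    {
        std::vector<pid_t> tasks;
        int cores = (int)sysconf(_SC_NPROCESSORS_ONLN);
        for (int core = 0; core < cores && core < 4; ++core) {   // launch a few demo tasks, one per core
            tasks.push_back(launch_task("./bd_task_demo", core));
        }
        // ... monitor heartbeats, restart dead children, etc.
        return 0;
    }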
bd_task will be the real challenge. In effect, it needs to be a main() that can be run on its own. It also needs to have the ability to recognize it was launched as a child process of a bd_application. The task may be a GUI or a simple console/headless task.
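One plausible way for that dual-mode main() to work, sketched with a hypothetical --managed flag and BD_PARENT_QUEUE environment variable standing in for whatever handshake bd_application eventually uses.

    // Sketch only: detect whether this task was launched by a bd_application overlord
    #include <cstdlib>
    #include <cstring>
    #include <cstdio>

    int main(int argc, char *argv[])
    {
        bool managed = false;
        for (int i = 1; i < argc; ++i) {
            if (std::strcmp(argv[i], "--managed") == 0) {   // hypothetical flag
                managed = true;
            }
        }

        const char *queue = std::getenv("BD_PARENT_QUEUE"); // hypothetical handshake

        if (managed && queue != nullptr) {
            std::printf("running under bd_application, queue: %s\n", queue);
            // subscribe to the overlord's message queue, publish heartbeats, etc.
        } else {
            std::printf("running stand-alone\n");
            // run with a purely local event loop, GUI or headless
        }
        return 0;
    }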
Ideally we should be able to extend this relationship via persistent message queues stored on disk. If your tasks are properly designed restartable units of work, the computer could crash and, once restarted, pick up where it left off.
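As a minimal illustration of the idea, not a design: a queue journaled to a flat file that a restarted task simply replays. A real implementation would need fsync, acknowledgements, and compaction; the file name and message format here are placeholders.

    // Sketch only: journal each published message to disk, replay after a restart
    #include <fstream>
    #include <iostream>
    #include <string>

    void publish(const std::string &journal, const std::string &message)
    {
        std::ofstream out(journal, std::ios::app);
        out << message << '\n';                  // one message per line, appended
    }

    void replay(const std::string &journal)
    {
        std::ifstream in(journal);
        std::string message;
        while (std::getline(in, message)) {
            // re-deliver each message to the restartable unit of work
            std::cout << "replaying: " << message << '\n';
        }
    }

    int main()
    {
        publish("bd_queue.journal", "sensor_reading 1234.56");
        replay("bd_queue.journal");              // what a restarted task would do
        return 0;
    }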