Porting an application to Linux

The story of how I created a way to port Windows Apps to Linux

One weekend day, sometime around the summer of 2018, I was doing house chores while listening to a podcast.

The podcast was Coder Radio, specifically episode #322, “Not so QT”.

That episode is about using Qt to develop a cross-platform GUI for a .NET application. In the end they decided to give up on the idea, mainly because it was very complicated to set up, it required development to happen on Windows (Qt does not support cross-compilation), and the license was prohibitively expensive.

When I heard this I thought: hmm, I think I know a way to solve this problem. I think I can come up with a solution that would work well in this context, specifically for business applications where memory usage is not too constrained.

A bit presumptuous and naive of me to think like this? Perhaps, but let me take you through that journey. I promise it won’t disappoint.

[Image: Windows logo and Tux, with an arrow going from Windows to Tux]

The idea

.NET does not have a solution for developing cross-platform GUIs. There are a few options, but they are not easy to set up and develop for.

On the other hand, there’s a technology that has been super popular for developing cross-platform apps: Electron.

Electron has been heavily criticized for its heavy memory use (mostly because of Slack), but there are great applications written with it that feel super smooth (VS Code), and they are probably responsible for enabling people to choose an operating system other than the one they normally use.

The problem is, you can’t develop using .NET in Electron; it’s all JavaScript and Node.js (I know, I know, there’s Electron.NET, but trust me, what I’m talking about here is completely different).

So the idea was: if Electron is basically Node.js, and we can start a .NET process from Node, why can’t we use Electron to build the UI and have all the behavior written in .NET? We just need a (non-convoluted) way of sending commands/requests between Node and .NET and it all should work, right?

Turns out that yes, it works and you probably already use this approach all the time.

Any time you pipe the output of one command to another in the shell, you are basically using the same idea I’m going to describe next.

And if you are skeptical about how robust this is, let me tell you that people do database restores/backups using this technique (e.g. cat backup.archive | mongorestore --archive).

Ok, no more beating around the bush: the idea is to use the stdin and stdout streams to create a two-way communication channel between two processes, in this case between Node.js and .NET.

In case these streams are new to you: stdin (the standard input stream) is normally used to read data from the terminal (like when a program asks you for input), and stdout (the standard output stream) is where your program writes data that shows up in the terminal. These can be redirected (piped) so that the output of one process becomes the input of another.
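To make the redirection idea concrete, here’s a minimal sketch of the same mechanism seen from the .NET side, using System.Diagnostics.Process (this is just an illustration of piping, not how ElectronCGI is wired up; someExecutable is a placeholder):

using System.Diagnostics;

class PipeDemo
{
    static void Main()
    {
        // Redirect the child's stdin/stdout so this process can write to and
        // read from them directly, the same mechanism the shell uses for pipes.
        var startInfo = new ProcessStartInfo("someExecutable")
        {
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using var child = Process.Start(startInfo);
        child.StandardInput.WriteLine("hello child");
        var reply = child.StandardOutput.ReadLine();
    }
}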


Node.js has a module named child_process that contains a function, spawn, that we can use to spawn new processes and grab hold of their stdin, stdout and stderr streams.

When using spawn to create a .NET process, we can send data to it through its stdin and receive data from it through its stdout.

Here’s how that looks:

const { spawn } = require('child_process');

const spawnedProcess = spawn('pathToExecutable', [arg1, arg2]);
spawnedProcess.stdin.write('hello .NET from Node.js');
spawnedProcess.stdout.on('data', data => {
    // data from .NET
});

Very simple idea, very few moving parts and very simple to set up.

Obviously, the code above is not very usable in that form. Here’s an example of what I ended up creating:

const connection = new ConnectionBuilder()
    .connectTo('DotNetExecutable')
    .build();

connection.send('greeting', 'John', (err, theGreeting) => {
    console.log(theGreeting);
});

The code above sends a request to .NET of type “greeting” with argument “John” and expects a response from .NET with a proper greeting to John.

I’m omitting a lot of details here, namely what actually gets sent over the stdin/stdout streams, but that’s not terribly important.

What I did leave out, and what is important, is how this works in .NET.

In a .NET application it’s possible to get access to the process’ stdin and stdout streams. They are available through the Console class’s In and Out properties.

The only care required here is to keep reading from the streams while keeping them open. Thankfully, StreamReader supports this through an overload of its Read method.
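Here’s a minimal sketch (not ElectronCGI’s actual implementation) of a .NET process that keeps stdin open, reads whatever arrives, and echoes it back through stdout:

using System;

class EchoDemo
{
    static void Main()
    {
        var buffer = new char[1024];
        int charsRead;
        // Read blocks until data arrives and returns 0 only when the stream is
        // closed, so this loop keeps the channel open for the process' lifetime.
        while ((charsRead = Console.In.Read(buffer, 0, buffer.Length)) > 0)
        {
            var received = new string(buffer, 0, charsRead);
            // Anything written to stdout goes back to the Node.js parent.
            Console.Out.Write($"echo: {received}");
        }
    }
}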

Here’s how all that ended up looking in the first implementation of this idea in .NET:

var connection = new ConnectionBuilder()
    .WithLogging()
    .Build();

// expects a request named "greeting" with a string argument and returns a string
connection.On("greeting", name =>
{
    return $"Hello {name}!";
});

// wait for incoming requests
connection.Listen();

First experiments

I called the implementation of this idea ElectronCGI (which is probably not the best of names given that what this idea really enables is to execute .NET code from Node.js).

It allowed me to create these demo applications where the UI was built using Electron + Angular and/or plain JavaScript, with all non-UI code running in .NET.

PostgreSQL database records browser:

In that last demo, a query is performed on every keystroke and the results are returned and rendered. The perceived performance is so good that it totally feels like a native application, and all the non-UI code is .NET in both examples.

One thing that might not be obvious from the examples is that you can keep your application’s state in .NET.

One approach that is common with Electron apps is to use Electron to display a web page, where the actions you perform end up being HTTP requests to the server that hosts that page. That means you have to deal with everything HTTP-related (you need to pick a port, send HTTP requests, deal with routing, cookies, etc.).

With this approach, however, because there’s no server and the .NET process sticks around, you can keep all your state there. And the setup is super simple: literally two lines in Node.js and .NET and you have the processes “talking” to each other.

All in all, this gave me confidence that this idea was good and worth exploring further.


Pushing on: adding concurrency and two-way communication between the processes

At the time of these demos it was possible to send messages from Node.js to .NET, but not the other way around.

Also, everything was synchronous: if you sent two requests from Node.js and the first took one minute to finish, you’d have to wait that full minute before you got a response to the second request.

Because an image is worth more than a thousand words, here’s how that would look if you sent 200 requests from Node.js to .NET and every request took an average of 200ms to complete:

Enabling requests to run concurrently meant dealing with concurrency. And concurrency is hard.

This took me a while to get right, but in the end I used the .NET Task Parallel Library’s Dataflow components (TPL Dataflow).

It is a complicated subject, and in the process of figuring it out I wrote two blog posts; in case you are curious about Dataflow, here they are: TPL Dataflow in .Net Core, in Depth – Part 1 and Part 2.
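To give a flavour of the approach, here’s a rough sketch (not ElectronCGI’s actual code) that uses an ActionBlock from TPL Dataflow (the System.Threading.Tasks.Dataflow package) to run handlers concurrently instead of one at a time:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class ConcurrencyDemo
{
    static async Task Main()
    {
        // Handlers posted to the block run concurrently, up to the configured
        // degree of parallelism, instead of queueing behind one another.
        var handlerBlock = new ActionBlock<string>(
            async requestType =>
            {
                await Task.Delay(200); // stand-in for a handler that takes ~200ms
                Console.WriteLine($"handled {requestType}");
            },
            new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = Environment.ProcessorCount
            });

        for (var i = 0; i < 200; i++)
            handlerBlock.Post($"request-{i}");

        handlerBlock.Complete();
        await handlerBlock.Completion;
    }
}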

This is how much better the example above is when requests can be served concurrently:

The other big missing feature was the ability to send requests from .NET to Node.js; previously it was only possible to send a request from Node.js with an argument and get a response back from .NET with some result:

connection.send('event.get', 'enceladus', events => {
    // events is a list of events filtered using the filter 'enceladus'
});

This was enough for simple applications, but for more complex ones the ability to have .NET send requests was super important.

To do this I had to change the format of the messages that were exchanged using the stdin and stdout streams.

Previously .NET’s stdin stream would receive requests from Node, and responses to those requests were sent using its stdout stream.

To support duplex communication, the messages now include a type, which could be REQUEST or RESPONSE (later on I added ERROR as well). I also changed the API. In Node.js:

connection.send('requestType', 'optionalArgument', (err, optionalResponse) => {
    // err is the exception object if there's an exception in the .NET handler
});

// also added the ability to use promises:
try {
    const response = await connection.send('requestType', 'optionalArg');
} catch (err) {
    // handle err
}

// to handle requests from .NET:
connection.on('requestType', optionalArgument => {
    // optionally return a response
});
connection.On("requestType", (T argument) => < //return optional response >); //and to send: connection.Send("requestType", optionalArgument, (T optionalResponse) => < //use response >); // there's also an async version: var response = await connection.SendAsync("requestType", optionalArgument); 

Proof: porting a Windows Store application to Linux

When I first started with this idea, I imagined a good proof of its viability would be to pick an application built using MVVM, take the ViewModels, which are (or should be) UI-agnostic, and use them, unaltered, in an application using this approach.
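As a reminder of why that’s plausible: an MVVM ViewModel only needs INotifyPropertyChanged, never a reference to a windowing toolkit. A generic illustration (not code from the actual game):

using System.ComponentModel;

class ScoreViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private int _score;
    public int Score
    {
        get => _score;
        set
        {
            _score = value;
            // Any UI layer (XAML on Windows, or an Electron front end here)
            // can subscribe to this event; the ViewModel never references the UI.
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Score)));
        }
    }
}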

Thankfully, I had a game I’d built for the Windows Store around 2014, and I still had its source code. That game is named Memory Ace and you can still find it in the Windows Store here.

[Image: Memory Ace screenshot]

It turns out I was able to re-use all of that code in the cross-platform version with no problems. Here it is running on Ubuntu:

I was also able to run it on Windows with no problems. I don’t own a Mac, so I could not try it there.


If you want to have a look at the source code, you can find it here. Also, the source for ElectronCGI is here for Node.js and here for .NET.

You can also see here how easy it is to set up a project with ElectronCGI (it uses an outdated version, but the process is identical).


Porting a Windows application to Linux

I have an application which I need to port to Linux. I was thinking of using Free Pascal; the problem is that the application uses Windows APIs to perform tasks such as serial port communication. Is there an MSDN equivalent for Linux users, or a book covering how Linux works internally and what APIs there are? I am very confused.

5 Answers

Well, it’s sad to say, but if your code is very Windows-dependent (not VCL-dependent!) then it’ll probably be faster to write the application from scratch rather than port it.

But if it’s only a serial port matter, then give the cross-platform SynaSer library a try; find it here: http://synapse.ararat.cz.

And you could use a portable toolkit library (like Gtk or Qt); this would make it easy to code a Linux application that also ports to macOS and Windows.

Well, pure VCL is fine: you can use your application with Lazarus’s LCL on many flavours of systems with virtually no coding needed; just import the Delphi project into Lazarus 🙂 There is also the Code Ocean project, which is Lazarus plus a lot of ported components and libraries (including Turbo Power and so on).

By the way, there are rumors on the net that Delphi XE3 (or XE4) will have a native Linux compiler, with at least FireMonkey ported to it. However, the status of the VCL is highly unclear, mostly due to its Windows dependencies.

The OP should modify his application to use Synapse on Windows, and once he has it working he should convert the project to Lazarus.

Robert Love has a book on Linux systems programming that covers this area; Love’s books are generally good, so it is worth looking at.

It’s not entirely clear from your question, but if your concern is that specific calls to hardware-controlling functions in your Windows application make it difficult to port, I would suggest that fear is misplaced. Both Windows and Linux operate on the principle that the application-level programmer should be kept away from the hardware; all of that is handled by the operating system kernel and is only accessible to applications via system calls. As the kernels of different operating systems face similar demands from users/applications, they tend to have system calls that do the same sorts of things. It is necessary to match the system calls to one another, but I can see no reason why that should be impossible.

What may be more troublesome is that your Windows application may rely heavily on the Windows executive’s windowing code/API. Again, finding analogues for your code is not impossible but is likely to be a bit more complex; e.g. in Linux this stuff is generally not handled in the kernel at all (unlike in Windows).

But then again, your code may be written against a portable toolkit/library like Qt, which would make things a lot easier.
