Discussion:
In memory Stream
DrPi
2024-02-16 09:41:12 UTC
Permalink
Hi,

I want to transfer some data between applications through a memory buffer.
The buffer transfer between applications is under control.
My problem is with the buffer content.
I thought I would use a Stream writing to / reading from the memory buffer.
How can I achieve this? I've found no example doing this.
Note: I use Ada 2012.

Nicolas
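
[A sketch of the idea being asked about: derive a new type from Ada.Streams.Root_Stream_Type and override Read and Write to work against a memory buffer. All names below are made up for illustration; this is not any of the libraries mentioned later in the thread, and raising Constraint_Error on overflow is just one possible policy.]

```ada
with Ada.Streams; use Ada.Streams;

package Buffer_Streams is

   --  A stream over a fixed memory buffer.  Writing advances Write_Pos,
   --  reading advances Read_Pos; writing past the end raises an error.
   type Buffer_Stream (Size : Stream_Element_Count) is
      new Root_Stream_Type with
   record
      Data      : Stream_Element_Array (1 .. Size);
      Read_Pos  : Stream_Element_Offset := 1;
      Write_Pos : Stream_Element_Offset := 1;
   end record;

   overriding procedure Read
     (Stream : in out Buffer_Stream;
      Item   : out Stream_Element_Array;
      Last   : out Stream_Element_Offset);

   overriding procedure Write
     (Stream : in out Buffer_Stream;
      Item   : Stream_Element_Array);

end Buffer_Streams;

package body Buffer_Streams is

   overriding procedure Read
     (Stream : in out Buffer_Stream;
      Item   : out Stream_Element_Array;
      Last   : out Stream_Element_Offset)
   is
      Count : constant Stream_Element_Count :=
         Stream_Element_Count'Min
            (Item'Length, Stream.Write_Pos - Stream.Read_Pos);
   begin
      Item (Item'First .. Item'First + Count - 1) :=
         Stream.Data (Stream.Read_Pos .. Stream.Read_Pos + Count - 1);
      Stream.Read_Pos := Stream.Read_Pos + Count;
      Last := Item'First + Count - 1;  --  Last < Item'Last means short read
   end Read;

   overriding procedure Write
     (Stream : in out Buffer_Stream;
      Item   : Stream_Element_Array)
   is
   begin
      if Stream.Write_Pos + Item'Length > Stream.Data'Last + 1 then
         raise Constraint_Error with "memory buffer overflow";
      end if;
      Stream.Data
         (Stream.Write_Pos .. Stream.Write_Pos + Item'Length - 1) := Item;
      Stream.Write_Pos := Stream.Write_Pos + Item'Length;
   end Write;

end Buffer_Streams;
```

With that, any type's stream attributes work against the buffer, e.g. Integer'Write (S'Access, 42) then Integer'Read on the receiving side (S must be declared aliased). Note that the default stream representation of compound types is compiler-specific, which matters once the buffer crosses process or machine boundaries.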
J-P. Rosen
2024-02-16 10:40:54 UTC
Permalink
Post by DrPi
Hi,
I want to transfer some data between applications through a memory buffer.
The buffer transfer between applications is under control.
My problem is with the buffer content.
I thought I would use a Stream writing to / reading from the memory buffer.
How can I achieve this? I've found no example doing this.
Note: I use Ada 2012.
I don't know if this is what you want, but at least it is an example of
using streams...
Package Storage_Streams, from Adalog's components page:
https://adalog.fr/en/components.html#Storage_Stream
--
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
https://www.adalog.fr https://www.adacontrol.fr
Dmitry A. Kazakov
2024-02-16 12:40:27 UTC
Permalink
Post by DrPi
I want to transfer some data between applications through a memory buffer.
The buffer transfer between applications is under control.
My problem is with the buffer content.
I thought I would use a Stream writing to / reading from the memory buffer.
How can I achieve this? I've found no example doing this.
It of course depends on the target operating system. You need to create
a shared region or a memory-mapped file, etc. You also need system-wide
events to signal when the stream becomes empty or full.

Simple Components has an implementation of interprocess streams for the
usual suspects:

http://www.dmitry-kazakov.de/ada/components.htm#12.7
Post by DrPi
Note: I use Ada 2012.
No problem, it is kept Ada 95 compatible.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Pascal Obry
2024-02-16 12:49:54 UTC
Permalink
Hi,

AWS comes with a memory stream implementation.

https://github.com/AdaCore/aws/blob/master/include/memory_streams.ads

You may want to have a look here.

Have a nice day,
--
  Pascal Obry /  Magny Les Hameaux (78)

  The best way to travel is by means of imagination

  http://photos.obry.net

  gpg --keyserver keys.gnupg.net --recv-key F949BD3B
Lawrence D'Oliveiro
2024-02-16 21:54:36 UTC
Permalink
Post by DrPi
I thought I would use a Stream writing to / reading from the memory buffer.
Wouldn't it be simplest to let the OS manage the buffering for you?

<https://manpages.debian.org/7/pipe.en.html>
Dmitry A. Kazakov
2024-02-18 10:06:11 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by DrPi
I thought I would use a Stream writing to / reading from the memory buffer.
Wouldn't it be simplest to let the OS manage the buffering for you?
<https://manpages.debian.org/7/pipe.en.html>
That would make applications OS-dependent.
That’s a standard POSIX function. I think even M****s*ft W**d*ws has
something resembling it.
Yes, Windows has a POSIX layer which nobody ever uses.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
J-P. Rosen
2024-02-17 14:26:45 UTC
Permalink
The library Jean-Pierre pointed me to perfectly matches this usage.
Light and easy to use. Thanks.
:-)
One enhancement I see is to manage the buffer size to avoid buffer
overflow (or did I miss something?).

I don't see what you mean here... On the memory side, we are
reading/writing bytes from memory, there is no notion of overflow. And
the number of bytes processed by Read/Write is given by the size of
Item, so no overflow either...
--
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
https://www.adalog.fr https://www.adacontrol.fr
DrPi
2024-02-17 14:42:13 UTC
Permalink
Post by J-P. Rosen
The library Jean-Pierre pointed me to perfectly matches this usage.
Light and easy to use. Thanks.
:-)
One enhancement I see is to manage the buffer size to avoid buffer
overflow (or did I miss something?).
I don't see what you mean here... On the memory side, we are
reading/writing bytes from memory, there is no notion of overflow. And
the number of bytes processed by Read/Write is given by the size of
Item, so no overflow either...
A memory buffer IS limited in size. It is either a peripheral buffer or
a memory buffer you create yourself (my case). In either case, its size
is limited. When writing to the stream, you have to take care not to
overflow the buffer.
J-P. Rosen
2024-02-17 18:52:00 UTC
Permalink
Post by DrPi
The library Jean-Pierre pointed me to perfectly matches this
usage. Light and easy to use. Thanks.
Post by J-P. Rosen
🙂
Post by DrPi
One enhancement I see is to manage the buffer size to avoid
buffer overflow (or did I miss something?).
Post by J-P. Rosen
I don't see what you mean here... On the memory side, we are
reading/writing bytes from memory, there is no notion of overflow. And
the number of bytes processed by Read/Write is given by the size of
Item, so no overflow either...
Post by DrPi
A memory buffer IS limited in size. It is either a peripheral buffer
or a memory buffer you create yourself (my case). In either case, its
size is limited. When writing to the stream, you have to take care not
to overflow the buffer.

The purpose of this stream is to access raw memory, so there is no
notion of "buffer size". It is up to you to match your (user) buffer
with the memory buffer. Of course, you can add a layer with all the
checks you want...

[PS] I tried to respond to your email, but it bounced...
--
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
https://www.adalog.fr https://www.adacontrol.fr
Lawrence D'Oliveiro
2024-02-18 00:02:33 UTC
Permalink
Post by DrPi
A memory buffer IS limited in size. It is either a peripheral buffer or
a memory buffer you create yourself (my case). In either case, its size
is limited. When writing in the stream, you have to care to not overflow
the buffer.
With pipes, the OS takes care of this for you. Once its kernel buffer is
full, further writes are automatically blocked until a reader has drained
something from the buffer.

It’s called “flow control”.
Dmitry A. Kazakov
2024-02-17 14:48:05 UTC
Permalink
Post by J-P. Rosen
On the memory side, we are
reading/writing bytes from memory, there is no notion of overflow.
In the Simple Components there is a pipe stream.

type Pipe_Stream
( Size : Stream_Element_Count
) is new Root_Stream_Type with private;

When a task writes the stream full (Size elements), it gets blocked
until another task reads something out.

Another implementation

type Storage_Stream
( Block_Size : Stream_Element_Count
) is new Root_Stream_Type with private;

rather allocates new blocks of memory. The allocated blocks get reused
when their contents are read out.
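
Judging from the declaration quoted above, use of such a pipe stream would look roughly like this. The enclosing package name is not shown in this thread, so this is only a hedged sketch, not the actual Simple Components API:

```ada
--  Sketch only: assumes Pipe_Stream as declared above is visible;
--  the real Simple Components package name is not shown in this thread.
with Ada.Text_IO;

procedure Pipe_Demo is
   Pipe : aliased Pipe_Stream (Size => 1024);

   task Producer;

   task body Producer is
   begin
      --  Blocks once the pipe holds Size unread elements, until the
      --  consumer below reads something out.
      String'Output (Pipe'Access, "hello through the pipe");
   end Producer;

begin
   Ada.Text_IO.Put_Line (String'Input (Pipe'Access));
end Pipe_Demo;
```

The point of the blocking behavior is that the bounded buffer gives flow control for free, which is exactly the overflow concern raised earlier in the thread.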
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Dmitry A. Kazakov
2024-02-17 14:28:54 UTC
Permalink
Post by DrPi
Concerning the OS and the buffer transfer mechanism, as I said, this is
under control. I use Windows and the WM_COPYDATA message.
My usage is a bit special. The writing process writes a bunch of data in
a memory buffer, then requests this buffer to be transferred to another
process by way of WM_COPYDATA. The receiving process reads the data from
the "new" memory buffer. I say "new" since the address is different from
the one used in the writing process (of course it cannot be the same).
You ask Windows to copy a chunk of memory from one process space into
another, so yes it is physically different memory. Different or same
address tells nothing because under Windows System.Address is virtual
and can point anywhere.

As you may guess, this has quite a heavy overhead, not only because of
copying data between process spaces, but also because of sending and
dispatching Windows messages.

Note that if you implement stream Read/Write as individual Windows
messages it will become extremely slow. GNAT optimizes streaming of some
built-in objects, e.g. String. But in the general case you should expect
that streaming any non-scalar object will cause multiple calls to
Read/Write and thus multiple individual Windows messages.

An efficient way to exchange data under Windows is a file mapping. See
CreateFileMapping and MapViewOfFile.


https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createfilemappinga


https://learn.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-mapviewoffile

Then use CreateEvent with a name to signal states of the stream buffer
system-wide. Named Windows events are shared between processes.


https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-createeventa

[ This is how interprocess stream is implemented for Windows in Simple
Components ]
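
A minimal hand-written binding sketch for the three calls just mentioned (standard Win32 signatures; untested here, and a maintained binding such as Win32Ada would normally be preferred):

```ada
with System;
with Interfaces.C; use Interfaces.C;

package Win32_Sketch is

   subtype HANDLE is System.Address;

   --  Constants from the Windows SDK headers.
   PAGE_READWRITE      : constant unsigned_long := 16#04#;
   FILE_MAP_ALL_ACCESS : constant unsigned_long := 16#000F_001F#;

   --  Pass INVALID_HANDLE_VALUE as File to back the mapping by the
   --  paging file instead of a real file on disk.
   function CreateFileMappingA
     (File     : HANDLE;
      Security : System.Address;
      Protect  : unsigned_long;
      Size_Hi  : unsigned_long;
      Size_Lo  : unsigned_long;
      Name     : char_array) return HANDLE
   with Import, Convention => Stdcall,
        External_Name => "CreateFileMappingA";

   function MapViewOfFile
     (Mapping : HANDLE;
      Desired : unsigned_long;   --  e.g. FILE_MAP_ALL_ACCESS
      Off_Hi  : unsigned_long;
      Off_Lo  : unsigned_long;
      Size    : size_t) return System.Address
   with Import, Convention => Stdcall, External_Name => "MapViewOfFile";

   --  Named events are visible system-wide: two processes that create
   --  an event with the same name share one kernel event object.
   function CreateEventA
     (Security     : System.Address;
      Manual_Reset : int;
      Initial_Set  : int;
      Name         : char_array) return HANDLE
   with Import, Convention => Stdcall, External_Name => "CreateEventA";

end Win32_Sketch;
```

The address returned by MapViewOfFile can then be overlaid with a Stream_Element_Array via an address clause, giving the stream something to read from and write to.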
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
DrPi
2024-02-17 14:56:34 UTC
Permalink
Post by Dmitry A. Kazakov
Concerning the OS and the buffer transfer mechanism, as I said, this
is under control. I use Windows and the WM_COPYDATA message.
My usage is a bit special. The writing process writes a bunch of data
in a memory buffer, then requests this buffer to be transferred to
another process by way of WM_COPYDATA. The receiving process reads the
data from the "new" memory buffer. I say "new" since the address is
different from the one used in the writing process (of course it cannot
be the same).
You ask Windows to copy a chunk of memory from one process space into
another, so yes it is physically different memory. Different or same
address tells nothing because under Windows System.Address is virtual
and can point anywhere.
As you may guess, this has quite a heavy overhead, not only because of
copying data between process spaces, but also because of sending and
dispatching Windows messages.
Note that if you implement stream Read/Write as individual Windows
messages it will become extremely slow. GNAT optimizes streaming of some
built-in objects, e.g. String. But in the general case you should expect
that streaming any non-scalar object will cause multiple calls to
Read/Write and thus multiple individual Windows messages.
An efficient way to exchange data under Windows is a file mapping. See
CreateFileMapping and MapViewOfFile.
https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createfilemappinga
https://learn.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-mapviewoffile
Then use CreateEvent with a name to signal states of the stream buffer
system-wide. Named Windows events are shared between processes.
https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-createeventa
[ This is how interprocess stream is implemented for Windows in Simple
Components ]
In my use case, there is no performance problem.
The purpose is to make the editor a single-instance application. When
you launch the editor the first time, everything is done as usual. The
next time you launch the editor (for example by double-clicking on a
file in the file explorer), the init code of the editor detects that an
instance is already running, transfers the command-line arguments to the
first instance and exits.
The buffer transfer occurs once, when starting a new instance of the editor.

However, I keep your solution in mind. I might need it one day.
Thanks.
Simon Wright
2024-02-17 18:09:02 UTC
Permalink
Post by Dmitry A. Kazakov
Note, that if you implement stream Read/Write as individual Windows
messages it will become extremely slow. GNAT optimizes streaming of
some built-in objects, e.g. String. But as a general case you should
expect that streaming of any non-scalar object would cause multiple
calls to Read/Write and thus multiple individual Windows messages.
Our motivation for the memory stream was the equivalent of this for
UDP messages; GNAT.Sockets behaves (behaved?) exactly like this, so we
buffered the result of 'Output and wrote the constructed buffer to the
socket; on the other side, we read the UDP message, stuffed its contents
into a memory stream, then let the client 'Input.

I can't remember at this distance in time, but I think I would have
liked to construct a memory stream on the received UDP packet rather
than copying the content; the compiler wouldn't let me. Perhaps worth
another try.
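
The buffering described here can be sketched roughly as follows. Memory_Stream, its Data and Write_Pos components, and the Message type are hypothetical stand-ins for whatever memory stream is actually used, along the lines discussed earlier in the thread:

```ada
with Ada.Streams;  use Ada.Streams;
with GNAT.Sockets; use GNAT.Sockets;

--  Sketch: serialize once into a memory stream, then send one datagram.
procedure Send_Message (Socket : Socket_Type; M : Message) is
   Buffer : aliased Memory_Stream (Size => 1500);  --  hypothetical type
   Last   : Stream_Element_Offset;
begin
   Message'Output (Buffer'Access, M);  --  many Write calls, all in memory
   Send_Socket                         --  exactly one UDP send
     (Socket, Buffer.Data (1 .. Buffer.Write_Pos - 1), Last);
end Send_Message;
```

On the receive side the datagram's Stream_Element_Array is copied into a fresh memory stream and the client calls Message'Input on it, so each message costs one receive rather than one per 'Read.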
Simon Wright
2024-02-18 10:06:46 UTC
Permalink
Post by Dmitry A. Kazakov
UDP is a kind of thing... Basically, there is no use for UDP except
broadcasting, e.g. in LAN discovery.
It worked for us, sending radar measurements point-to-point at 200 Hz.
Post by Dmitry A. Kazakov
As for taking apart a UDP packet, it is straightforward. You simply
declare a stream element array of the packet size and map it onto the
packet:

   A : Stream_Element_Array (1 .. Packet_Size);  --  size as appropriate
   pragma Import (Ada, A);
   for A'Address use UDP_Packet'Address;

And somewhere

   pragma Assert (Stream_Element'Size = 8);

just in case...
OK if the participants all have the same endianness. We used XDR (and
the translation cost is nil if the host is big-endian, as PowerPCs are;
all the critical machines were PowerPC).
Dmitry A. Kazakov
2024-02-18 13:02:32 UTC
Permalink
Post by Simon Wright
OK if the participants all have the same endianness. We used XDR (and
the translation cost is nil if the host is big-endian, as PowerPCs are;
all the critical machines were PowerPC).
I always override stream attributes and use portable formats. E.g. some
chained code for integers; sign + exponent + normalized mantissa for
floats, again chained. That is all. There is no need for XDR, JSON,
ASN.1 or other data-representation mess. They are just worthless overhead.
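
"Chained code" is not spelled out here; one guess at a scheme in this spirit is a base-128 encoding in which the top bit of each octet flags a continuation. Such an encoding is independent of machine endianness and word size. A sketch for non-negative integers:

```ada
with Ada.Streams; use Ada.Streams;

--  Emit Value as a chain of 7-bit groups, most significant group first;
--  bit 7 of every octet except the last is set, meaning "more follows".
procedure Write_Chained
  (Stream : not null access Root_Stream_Type'Class;
   Value  : Natural)
is
   V   : Natural := Value;
   Buf : Stream_Element_Array (1 .. 5);  --  5 * 7 bits covers 31-bit Natural
   N   : Stream_Element_Offset := Buf'Last + 1;
begin
   loop
      N := N - 1;
      Buf (N) := Stream_Element (V mod 128);
      V := V / 128;
      exit when V = 0;
   end loop;
   for I in N .. Buf'Last - 1 loop
      Buf (I) := Buf (I) + 128;  --  continuation flag
   end loop;
   Stream.Write (Buf (N .. Buf'Last));
end Write_Chained;
```

For example, 300 goes out as the two octets 16#82# 16#2C#. A matching reader accumulates 7-bit groups until it sees an octet with bit 7 clear.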
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-18 20:58:47 UTC
Permalink
Post by Dmitry A. Kazakov
There is no need in XDR, JSON, ASN.1
or other data representation mess. They are just worthless overhead.
Most languages nowadays have JSON libraries readily available. That is a
very easy format to use for passing structured data between processes.
Dmitry A. Kazakov
2024-02-18 22:10:07 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
There is no need in XDR, JSON, ASN.1
or other data representation mess. They are just worthless overhead.
Most languages nowadays have JSON libraries readily available. That is a
very easy format to use for passing structured data between processes.
It is easy to jump down the stairwell too. Though I would not recommend
such a course of action...
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-18 23:44:54 UTC
Permalink
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
There is no need in XDR, JSON, ASN.1 or other data representation
mess. They are just worthless overhead.
Most languages nowadays have JSON libraries readily available. That is
a very easy format to use for passing structured data between
processes.
It is easy to jump down the stairwell too. Though I would not recommend
such a course of action...
Fun fact: you can prove any argument just by coming up with a suitably
spurious analogy. For example, your argument is wrong, just by virtue of
the fact that cats land on their feet.
Dmitry A. Kazakov
2024-02-19 08:32:42 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
There is no need in XDR, JSON, ASN.1 or other data representation
mess. They are just worthless overhead.
Most languages nowadays have JSON libraries readily available. That is
a very easy format to use for passing structured data between
processes.
It is easy to jump down the stairwell too. Though I would not recommend
such course of action...
Fun fact: you can prove any argument just by coming up with a suitably
spurious analogy. For example, your argument is wrong, just by virtue of
the fact that cats land on their feet.
No. There is no argument, as you provided none. You did not say why JSON
is needed. You said there are libraries. Yes, there are; Simple
Components provides a JSON parser:

http://www.dmitry-kazakov.de/ada/components.htm#13.10

So what? Its purpose is support of legacy protocols and interfacing
with other languages. For an Ada program, JSON has no use.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-20 00:41:10 UTC
Permalink
You did not say why JSON is needed.
Because it’s such a convenient meta-format, and its text basis helps with
debugging. Its popularity aids interoperability with code bases in other
languages, support by existing tools, and so on and so on.

If you didn’t know all that, you’ve been living under a coconut shell, as
we say in the old country ...
Dmitry A. Kazakov
2024-02-20 08:55:30 UTC
Permalink
Post by Lawrence D'Oliveiro
You did not say why JSON is needed.
Because it’s such a convenient meta-format,
Meta of what? How is it convenient for streaming objects? Begin with
access types. Proceed with time stamps.
Post by Lawrence D'Oliveiro
and its text basis helps with
debugging.
There is nothing to debug in an implementation of stream attributes. Nor
is it helpful for debugging communication-logic issues, because the
format is a *data representation* one. It represents *data*: not
objects, not states. All vital information about the logic and state is
not there; it is in the context. This is the main reason why *all* data
representation formats are useless garbage, even when binary.

A text basis helps to produce a 100-to-1 overhead in payload, which
directly translates into latency, network and CPU load, storage space,
packet overflows, variable-length packets where they should have been
fixed, chunked transfers, dynamic memory allocation and a mess that
makes a 64-core CPU perform like an i286.

It is absolutely useless: you cannot read, browse, or search real-life,
gigabytes-long communication logs without customized tools.

Driving a car, heating the house, browsing the Internet, I do not care
about the logs. The damn thing must work.
Post by Lawrence D'Oliveiro
Its popularity aids interoperability with code bases in other
languages,
A requirement does not aid anything. It is just a requirement. JSON
would not aid you in dealing with X.509 certificates. They are in ASN.1.
Post by Lawrence D'Oliveiro
support by existing tools, and so on and so on.
A lemming's argument: everybody's jumping, so I am jumping too.
Post by Lawrence D'Oliveiro
If you didn’t know all that, you’ve been living under a coconut shell, as
we say in the old country ...
I am pretty well aware of data representation formats. Moreover, as you
may have noticed, I have implemented lots of them. Not because it is
fun, but because communication protocols are my job.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-20 19:37:19 UTC
Permalink
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
You did not say why JSON is needed.
Because it’s such a convenient meta-format,
Meta of what?
You don’t understand the concept of “meta-formats”? Maybe you prefer
“format family” or “format superclass”. Does that help make things
clearer? It is something easily specialized to become an application-
specific format, with less effort than creating the specific format from
scratch.

An earlier example is XML. Also IFF on the Commodore-Amiga, from the
1980s.

Does that help?
Dmitry A. Kazakov
2024-02-20 20:45:46 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
You did not say why JSON is needed.
Because it’s such a convenient meta-format,
Meta of what?
You don’t understand the concept of “meta-formats”?
Nope. The meaning of the word "meta" is having an object made out of
entities operating on some other objects. E.g. meta-language vs object
language, metadata (data about data), logical inference vs logical
predicates.

Meta-format must represent formats rather than data. JSON is not that
thing. It is just a [bad] data representation format.
Post by Lawrence D'Oliveiro
Maybe you prefer
“format family” or “format superclass”. Does that help make things
clearer?
No. I don't care about classifications of poorly designed formats. JSON
is not a format family and a family of formats is not a meta-format.
Post by Lawrence D'Oliveiro
It is something easily specialized to become an application-
specific format, with less effort than creating the specific format from
scratch.
It is always the same format. JSON's inability to describe any
constraints does not make it *specialized*. The burden of checks is
moved to the application; the format is the same. All such stupid things
only add overhead and additional points of failure, and make designing
reasonable recovery logic impossible.

[ It keeps me wondering. Coding theory has existed for more than a
hundred years. People are inventing square wheels made of cabbage leaves
instead of taking a short course... ]
Post by Lawrence D'Oliveiro
An earlier example is XML. Also IFF on the Commodore-Amiga, from the
1980s.
You can go back as far as Hollerith specifications... (:-))
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-20 22:32:34 UTC
Permalink
The burden of checks is moved to the application, the format is same.
Isn’t that how all formats are implemented?
Dmitry A. Kazakov
2024-02-21 07:43:13 UTC
Permalink
Post by Lawrence D'Oliveiro
The burden of checks is moved to the application, the format is same.
Isn’t that how all formats are implemented?
There is a difference in the semantics of checks. Checks below and above
the OSI level of the format are outside the scope of the format. One
thing is to check a string against a database of client names; another
is to check its length or the validity of its UTF-8 encoding.

Do you check ASCII characters? No, because they are densely encoded. If
error correction etc. is needed, it is added below.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-21 19:44:33 UTC
Permalink
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
The burden of checks is moved to the application, the format is same.
Isn’t that how all formats are implemented?
There is a difference in semantics of checks.
Think of a stream of bytes as the ultimate meta-format. All extra layout
on top of that is “moved to the application”, as you say. But it just
takes more work starting from such a low level; starting from a higher
point, like JSON, reduces that work.
Dmitry A. Kazakov
2024-02-22 08:54:07 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
The burden of checks is moved to the application, the format is same.
Isn’t that how all formats are implemented?
There is a difference in semantics of checks.
Think of a stream of bytes as the ultimate meta-format.
Of course not. It is not a format; it is a transport layer.
[ BTW, it is not bytes, it is octets, actually ]
Post by Lawrence D'Oliveiro
All extra layout
on top of that is “moved to the application”, as you say. But it just
takes more work starting from such a low level; starting from a higher
point, like JSON, reduces that work.
Not at all. Implementing serialization/deserialization on top of
JSON is exponentially harder than on top of an octet stream. The mere
specification of handling missing, superfluous, or wrongly typed fields
is a huge amount of work before a single line of code is written.
Furthermore,

1. JSON is unable to represent basic data, like time stamps. These must
be encoded as strings, accompanied by parsing and checks. Compare that
with encoding as octets.

2. JSON is not extensible in any sense. You cannot add new syntax
elements to JSON.

3. There is no abstraction by which you could reuse a JSON encoding,
i.e. this element is like that element; repeat this the number of times
specified by that.

4. Nor does JSON support extension objects. Compare it with Ada's
extension aggregates:

http://ada-auth.org/standards/rm12_w_tc1/html/RM-4-3-2.html#I2535

5. JSON cannot specify constraints, like value ranges, precision,
variable record fields, array bounds.

6. JSON has no means of reflection. Talking about "metas", there is no
way to convey a JSON description of an object (rather than an instance =
data) to another node. Both sides must know each other prior to
communication. I don't say that dynamic binding is a good idea for
communication, for all the tall claims made and all the immense overhead
involved...

JSON is an extremely crude, primitive, hobbyish idea that a lazy
undergraduate in horticulture might have about a communication
protocol... (:-))
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-22 19:53:02 UTC
Permalink
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
Think of a stream of bytes as the ultimate meta-format.
Of course not. It is not a format it is a transport layer.
[ BTW, it is not bytes, it is octets actually ]
We normally work with bytes here. And your "transport layer" is what
gets the bytes from one place to another. So my point still stands: the
byte stream is the ultimate lowest-level meta-format.
Post by Dmitry A. Kazakov
Implementation of serialization/deserialization on top of
JSON is exponentially harder than on top of an octet stream.
We already have libraries for doing that, at least for byte streams.
That’s why it’s so much easier to build on top of those, rather than going
back to bytes every time.

As for octets--I guess you’re on your own.
Post by Dmitry A. Kazakov
Alone specification of handling missing, superfluous, wrongly typed
fields is a huge work before a single line of code is written.
All the code for that already exists, in most if not all common languages.
Post by Dmitry A. Kazakov
5. JSON cannot specify constraints, like value ranges, precision,
variable record fields, array bounds.
Those are specific to the format you are building on top of the meta-
format.
Post by Dmitry A. Kazakov
6. JSON has no means of reflection.
Again, this is specific to the format. For an example of how you can do
this, see Varlink <https://varlink.org/>.
Nioclásán Caileán Glostéir
2024-03-25 11:07:47 UTC
Permalink
Lawrence D'Oliveiro wrote:
"On Thu, 22 Feb 2024 09:54:07 +0100, Dmitry A. Kazakov wrote:
[. . .]

We normally work with bytes here. [. . .]
[. . .] So my point still stands: the
bytestream is the ultimate lowest-level meta-format."

Dear Mister Lawrence D'Oliveiro,

Bit and std_ulogic and electron are at lower levels.

"> Alone specification of handling missing, superfluous, wrongly typed
> fields is a huge work before a single line of code is written.
All the code for that already exists, in most if not all common
languages."

BNF is better than all common languages.

With best regards.
Nioclásán Caileán Glostéir
Lawrence D'Oliveiro
2024-03-25 21:21:10 UTC
Permalink
Post by Nioclásán Caileán Glostéir
BNF is better than all common languages.
Go look up “Van Wijngaarden grammars”.
Dmitry A. Kazakov
2024-02-18 10:06:16 UTC
Permalink
My usage is a bit special. The writing process writes a bunch of data in
a memory buffer then requests this buffer to be transferred to another
process by way of WM_COPYDATA.
I thought Windows had pipes.
Yes it has, but they are very rarely used, though much better designed
than UNIX pipes. See


https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createnamedpipea

In general, Windows has a much richer and better API for interprocess
communication than Linux. After all, Windows NT was a sort of descendant
of VMS, which was light-years ahead of UNIX Sys V. In recent times Linux
has improved, e.g. the futex stuff was added, etc. BSD is far worse than
Linux in this respect.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-18 20:56:05 UTC
Permalink
Post by Dmitry A. Kazakov
My usage is a bit special. The writing process writes a bunch of data in
a memory buffer then requests this buffer to be transferred to another
process by way of WM_COPYDATA.
I thought Windows had pipes.
Yes it has, but they are very rarely used, though much better designed
than UNIX pipes.
So why don’t programmers use it?
Post by Dmitry A. Kazakov
In general Windows has much richer and better API regarding interprocess
communication than Linux.
So why is it that Windows programs tend to avoid running multiple processes?
Dmitry A. Kazakov
2024-02-18 22:10:10 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
My usage is a bit special. The writing process writes a bunch of data in
a memory buffer then requests this buffer to be transferred to another
process by way of WM_COPYDATA.
I thought Windows had pipes.
Yes it has, but very rarely used though much better designed than UNIX
pipes.
So why don’t programmers use it?
There is no need for that. At least initially, UNIX had a distinct
philosophy. Its essence was that if a mouse had three buttons, there
must be three processes, one for each button. Every minuscule task was
split into even lesser subtasks connected through pipes. I remember a C
compiler that had 5 passes and took forever to compile hello-world. I
wonder if anybody still actively uses that messy style of piping awk,
grep, sed so typical of early UNIX users.
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
In general Windows has much richer and better API regarding interprocess
communication than Linux.
So why is it that Windows programs tend to avoid running multiple processes?
Because there is no need for multiple processes most of the time.
Windows has a different philosophy, and services which preclude the
process orgy so characteristic of UNIX. For example, Windows tracks and
reclaims many resources when a process dies. So you do not need a
process monitoring file locks, because there are none. Instead you would
deploy a global mutex, reclaimed automatically. I do not say that
Windows has few processes. It is bloated beyond any reason.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-18 23:47:12 UTC
Permalink
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
I thought Windows had pipes.
Yes it has, but very rarely used though much better designed than UNIX
pipes.
So why don’t programmers use it?
There is no need for that.
It would be so much simpler to use the OS-provided facility, than having
to resort to this complicated library which is trying to wrap a stream
interface around shared-memory buffers.

At least, it would be in POSIX. No doubt the Windows API makes it more
complicated ...
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
In general Windows has much richer and better API regarding
interprocess communication than Linux.
So why is it that Windows programs tend to avoid running multiple processes?
Because there is no need for multiple processes most of the time.
Windows has a different philosophy, and services which preclude the
process orgy so characteristic of UNIX. For example, Windows tracks and
reclaims many resources when a process dies. So you do not need a
process monitoring file locks, because there are none.
Windows is the one that keeps files locked, *nix systems typically do not.
Dmitry A. Kazakov
2024-02-19 08:39:45 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
In general Windows has much richer and better API regarding
interprocess communication than Linux.
So why is it that Windows programs tend to avoid running multiple processes?
Because there is no need in multiple processes most of the time. Windows
has a different philosophy and services which preclude the process orgy
so characteristic to UNIX. For example, Windows has and collects many
resources when a process dies. So you do not need a process monitoring
file locks, because there is no any.
Windows is the one that keeps files locked, *nix systems typically do not.
Not Windows. It is applications whose GUI died with files still open.
If you want UNIX behavior, open all files for shared I/O.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-20 00:43:55 UTC
Permalink
Post by Dmitry A. Kazakov
Not Windows. It is applications whose GUI died with files still open.
Files do not stay open after the processes that have them open terminate
under Linux.

Windows does seem to cling to the old VMS model of keeping things locked
down, no matter how much trouble that causes ...
Björn Lundin
2024-02-19 09:24:05 UTC
Permalink
Post by Lawrence D'Oliveiro
So why is it that Windows programs tend to avoid running multiple processes?
Perhaps on Windows CreateProcess is much heavier relative to threading
than on Unix. My suspicion only, though.


MS SQL Server uses it for IPC.
--
/Björn
Dmitry A. Kazakov
2024-02-19 09:46:27 UTC
Permalink
Post by Björn Lundin
Post by Lawrence D'Oliveiro
So why is it that Windows programs tend to avoid running multiple processes?
Perhaps on Windows CreateProcess is much heavier relative to threading
than on Unix. My suspicion only, though.
MS SQL Server uses it for IPC.
Firefox starts a process for each tab! The next stop is placing each one
in a Docker container ... and, of course, HTTP for communication ...

The Holy Grail of modern computing is to use each available bit and
each CPU tick for doing exactly nothing! (:-))
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Lawrence D'Oliveiro
2024-02-20 00:42:45 UTC
Permalink
Post by Dmitry A. Kazakov
Firefox starts a process for each tab!
All the web browsers do nowadays. That’s the only way to maximize
isolation of potentially hostile websites.

Does that hurt performance on Windows more than it does on Linux?
Kevin Chadwick
2024-04-02 00:21:34 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
Firefox starts a process for each tab!
All the web browsers do nowadays. That’s the only way to maximize
isolation of potentially hostile websites.
That sandboxing is because they're written in C. If they were written in
Ada then the original design, which uses far less memory, would be
preferable.
--
Regards, Kc
Lawrence D'Oliveiro
2024-04-02 00:27:08 UTC
Permalink
Post by Kevin Chadwick
Post by Lawrence D'Oliveiro
Post by Dmitry A. Kazakov
Firefox starts a process for each tab!
All the web browsers do nowadays. That’s the only way to maximize
isolation of potentially hostile websites.
That sandboxing is because they're written in C. If they were written
in Ada then the original design, which uses far less memory, would be
preferable.
I wouldn’t use one characteristic as an excuse for not doing the other
thing as well.
Kevin Chadwick
2024-04-02 03:27:11 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Kevin Chadwick
That sandboxing is because they're written in C. If they were written
in Ada then the original design, which uses far less memory, would be
preferable.
I wouldn’t use one characteristic as an excuse for not doing the other
thing as well.
??

Are you just trolling? The JS engine would need to be rewritten in Ada as
well.
--
Regards, Kc
Pól Niocláſ Caileán Gloſtéir
2024-04-03 19:43:53 UTC
Permalink
Dear all,

He is not a troll.

With kind regards.
Pól Niocláſ Caileán Gloſtéir
Chris Townley
2024-04-03 22:44:12 UTC
Permalink
Post by Pól Niocláſ Caileán Gloſtéir
Dear all,
He is not a troll.
With kind regards.
Pól Niocláſ Caileán Gloſtéir
Really?
--
Chris
Lawrence D'Oliveiro
2024-02-18 20:57:23 UTC
Permalink
My usage is a bit special. The writing process writes a bunch of data in
a memory buffer then requests this buffer to be transferred to another
process by way of WM_COPYDATA.
I thought Windows had pipes.
It does,
We use it for our IPC in both Linux and Windows.
Works very well.
Why doesn’t the OP use them, then?
Björn Lundin
2024-02-19 14:59:05 UTC
Permalink
Post by Lawrence D'Oliveiro
My usage is a bit special. The writing process writes a bunch of data in
a memory buffer then requests this buffer to be transferred to another
process by way of WM_COPYDATA.
I thought Windows had pipes.
It does,
We use it for our IPC in both Linux and Windows.
Works very well.
Why doesn’t the OP use them, then?
I have no idea,

You can see them with
powershell -command [System.IO.Directory]::GetFiles(\"\\.\pipe\")
--
/Björn
Chris Townley
2024-02-19 17:01:59 UTC
Permalink
Post by Björn Lundin
Post by Lawrence D'Oliveiro
My usage is a bit special. The writing process writes a bunch of data in
a memory buffer then requests this buffer to be transferred to another
process by way of WM_COPYDATA.
I thought Windows had pipes.
It does,
We use it for our IPC in both Linux and Windows.
Works very well.
Why doesn’t the OP use them, then?
I have no idea,
You can see them with
powershell -command [System.IO.Directory]::GetFiles(\"\\.\pipe\")
Don't feed the troll
--
Chris