The CVE-2020-0601 vulnerability marks the first time Microsoft has credited the NSA for reporting a bug

Uh, I updated to Windows 10 1903 yesterday and it told me the connection is not private. Now I woke up, got to my computer, tested it again, and now it's telling me "Hello World". So, am I compromised, yes or no?
 
If this RCE could have granted NT AUTHORITY\SYSTEM, do you think the NSA would have disclosed it at all, or do you think they would have kept it in their arsenal like they did with EternalBlue/DoublePulsar?
I agree, this is probably more a vulnerability replacement update than any kind of fix. Still, a NOBUS'ed hole means there's only one party using it instead of literally every agency and criminal cartel out there.
 
Yeah, my Windows is up to date and yet desktop Brave is giving that error. I assume it's something to do with the built-in adblocking.


So after restarting a few times and making sure I didn't need any more updates, I'm now getting a cert error on Brave, Chrome, and Firefox on my PC. So I'm going to guess it was a false positive. Retested Brave on Android and got a cert error too, so it seems like shit's as stable as it'll get.
 
Read the NSA report and check the curves as they mention. That's going to be the easiest way for you to see if there's something that doesn't belong. Btw, these two?

Code:
  Oakley-EC2N-3:
    IPSec/IKE/Oakley curve #3 over a 155 bit binary field.
    Not suitable for ECDSA.
    Questionable extension field!
  Oakley-EC2N-4:
    IPSec/IKE/Oakley curve #4 over a 185 bit binary field.
    Not suitable for ECDSA.
    Questionable extension field!

As far as I can tell, they're fine regardless of what that output says about them.

You can check what comes up on your end against this list from MS as well if you're not sure if it's normal: https://docs.microsoft.com/en-us/uwp/api/windows.security.cryptography.core.ecccurvenames

And here is some information about what you're dealing with: https://safecurves.cr.yp.to/
 
It actually has nothing to do with the OS version. 64-bit versions of Windows don't have the components necessary to run 16-bit apps, but 32-bit versions do. So 32-bit Windows 10 actually can run 16-bit apps, while 64-bit XP has many of the same troubles Vista+ tends to have.
It's not that the components are missing. It's that the processor can't run real mode code while in 64-bit long mode.
 
It's not that the components are missing. It's that the processor can't run real mode code while in 64-bit long mode.
I'll admit I'm not very technically savvy, so my understanding is probably a bit off; feel free to correct me. As I understand it, when they made the move to 32-bit they made sure 16-bit programs worked by emulating the necessary processes for 16-bit apps. However, when they moved to 64-bit they didn't port that stuff over, so 16-bit programs don't work in 64-bit versions of Windows as a result. As far as I understand it, there's no reason it couldn't be ported over or remade if they wanted to keep 16-bit compatibility; they just didn't do it. That's why I said it's missing components: 64-bit can't run them natively, but you can emulate the necessary processes. That's what they did back in the 32-bit era, and the same thing's done with 32-bit programs on 64-bit systems.
 
I personally run KDE Neon and Windows 10 on both my desktop and laptop, both OSes on each device. I use KDE Neon for just about every task; Windows is for games on my desktop and less than 1% of my work on my laptop.

There's two things to bear in mind. If you can't figure out how to use Linux, you're probably a dumb boomer, and should probably get off the Internet. The second thing to bear in mind is that Linux is basically held together with duct tape and Windows will always be better for games. Thus, I recommend the dual booting I do.

Don't be an idiot, and you won't have security issues, but don't expect Linux to actually be useful for a lot of important tasks that work on Windows.
 
It's not that the components are missing. It's that the processor can't run real mode code while in 64-bit long mode.
Yeah. That always seemed weird. There's also the shitty fact that retaining 16-bit capability in x86 cripples 64-bit by banking memory, with the address space actually being something like 48 bits.

...can't DOSBox run 16-bit DOS programs on 64-bit systems? Does it pad the instructions? Or is it an emulation thing?
 
I'll admit I'm not very technically savvy, so my understanding is probably a bit off; feel free to correct me. As I understand it, when they made the move to 32-bit they made sure 16-bit programs worked by emulating the necessary processes for 16-bit apps. However, when they moved to 64-bit they didn't port that stuff over, so 16-bit programs don't work in 64-bit versions of Windows as a result. As far as I understand it, there's no reason it couldn't be ported over or remade if they wanted to keep 16-bit compatibility; they just didn't do it. That's why I said it's missing components: 64-bit can't run them natively, but you can emulate the necessary processes. That's what they did back in the 32-bit era, and the same thing's done with 32-bit programs on 64-bit systems.
First some terminology (rev up those autistic ratings):

Real mode - the execution mode of all x86 processors when they start up. This is a 16-bit mode without flat memory, paging, or memory protection. All addressing is done via segment registers. This is what the original 8086 used.

Protected mode - the execution mode of x86 processors starting with the 80386 (technically it started with the 286, but it didn't really work so most people just used the 286 in real mode). This is a 32-bit mode with flat memory, paging, and memory protection. Processes are a real thing here (as in, the MMU exists) and all memory access takes place in a virtual memory map that prevents processes from overwriting each other's memory.

Long mode - introduced with AMD's 64-bit extensions to x86. Like protected mode, this mode uses flat memory, paging, and memory protection.

When they moved to 32-bit, Intel added a virtualization subsystem called virtual 8086 mode that could be accessed within protected mode. This essentially creates an isolated virtual 8086 execution context (as a process) that can run real mode (16-bit) code with only a small shim in the host operating system. This virtual 8086 processor mode is not available in long mode (64-bit) and so real mode software cannot be run directly on the processor.

32-bit software running on 64-bit is actually straightforward. Long mode has compatibility features for 32-bit software. Userland code does not need to be modified. The only thing that the OS needs to do to support 32-bit software is provide a shim that dynamically translates data structures passed in to system calls from 32-bit to 64-bit, passes that data to the equivalent 64-bit system call, and then translates that 64-bit data structure back into a 32-bit one once the system call returns.
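To make that a bit more concrete, here's a toy sketch of the kind of thunk that's meant. All the names and structures are made up for illustration; this is not the real NT/WoW64 interface. The 32-bit caller's structure uses 32-bit pointers and sizes, so the shim widens it into the 64-bit layout, calls the native routine, and would copy any results back afterwards.

Code:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical structures: how a 32-bit process and a 64-bit kernel
 * might lay out "the same" request. Pointer-sized fields differ. */
struct io_request32 {
    uint32_t buffer;   /* 32-bit pointer, stored as an integer */
    uint32_t length;
};

struct io_request64 {
    uint64_t buffer;   /* 64-bit pointer */
    uint64_t length;
};

/* Pretend native 64-bit system call (illustrative only). */
static long sys_do_io64(struct io_request64 *req)
{
    /* ... real work would happen here ... */
    return (long)req->length;
}

/* The thunk: what a WoW64-style shim conceptually does when a
 * 32-bit caller issues the system call. */
static long sys_do_io_thunk32(const struct io_request32 *req32)
{
    struct io_request64 req64;

    /* Widen every pointer/size field from the 32-bit layout. */
    req64.buffer = (uint64_t)req32->buffer;  /* zero-extend the pointer */
    req64.length = (uint64_t)req32->length;

    long status = sys_do_io64(&req64);

    /* If the call wrote results back into the structure, they would
     * be narrowed and copied into the caller's 32-bit layout here. */
    return status;
}

int main(void)
{
    struct io_request32 req = { .buffer = 0x1000, .length = 42 };
    long status = sys_do_io_thunk32(&req);
    printf("status = %ld\n", status);   /* prints status = 42 */
    return 0;
}

The instructions themselves need no translation, because long mode executes 32-bit code directly; it's only the data crossing the 32/64 boundary that needs this kind of massaging.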

The reason why 16-bit is harder to do than 32-bit is because the transition from 16-bit to 32-bit in x86 land also marked the transition to a completely new execution and memory model. In contrast, 64-bit mode is pretty much just 32-bit mode with more general-purpose registers.

EDIT: I suppose I should add a caveat. A lot of 16-bit protected mode code exists (remember me mentioning the 286?). In fact, most of the 16-bit Windows code is protected mode and so should be technically capable of being run on modern 64-bit Windows. Microsoft doesn't support it because why would they. No one is still using Windows software from before Win95 that hasn't been updated to support at least Win95.

Yeah. That always seemed weird. There's also the shitty fact that retaining 16-bit capability in x86 cripples 64-bit by banking memory, with the address space actually being something like 48 bits.

...can't DOSBox run 16-bit DOS programs on 64-bit systems? Does it pad the instructions? Or is it an emulation thing?

16-bit compat wouldn't have affected 64-bit performance much at all (since it was just being virtualized in 32-bit procs anyway). The main reason it was dropped was because of the added complexity of having to support switching per-process from 64-bit to 32-bit to 16-bit. It would have greatly complicated chip designs at the time and so it was decided that it should be left out.

As for DOSBox, it's a full emulator. It emulates something resembling a 386 CPU.
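For a sense of what "full emulator" means: at its core it's just a loop that fetches guest instruction bytes from an array and reproduces their effects in software, regardless of what the host CPU is. Here's a toy sketch for a handful of real one-byte 8086 opcodes (DOSBox's actual core is vastly more involved):

Code:
#include <stdint.h>
#include <stdio.h>

/* Toy 8086-ish emulator: just enough state for a few one-byte opcodes. */
struct cpu {
    uint8_t  al;      /* low byte of AX */
    uint16_t ip;      /* instruction pointer */
    int      halted;
};

static void run(struct cpu *c, const uint8_t *mem)
{
    while (!c->halted) {
        uint8_t op = mem[c->ip++];    /* fetch */
        switch (op) {                 /* decode + execute */
        case 0x90:                    /* NOP */
            break;
        case 0xB0:                    /* MOV AL, imm8 */
            c->al = mem[c->ip++];
            break;
        case 0x04:                    /* ADD AL, imm8 */
            c->al += mem[c->ip++];
            break;
        case 0xF4:                    /* HLT */
            c->halted = 1;
            break;
        default:
            printf("unhandled opcode 0x%02X\n", op);
            c->halted = 1;
        }
    }
}

int main(void)
{
    /* Guest "program": MOV AL, 5 / ADD AL, 3 / HLT */
    const uint8_t program[] = { 0xB0, 0x05, 0x04, 0x03, 0xF4 };
    struct cpu c = { 0 };

    run(&c, program);
    printf("AL = %u\n", c.al);        /* prints AL = 8 */
    return 0;
}

Because nothing in that loop touches processor modes at all, it works the same whether the host is 32-bit, 64-bit, or not x86 in the first place.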
 
First some terminology (rev up those autistic ratings):

Real mode - the execution mode of all x86 processors when they start up. This is a 16-bit mode without flat memory, paging, or memory protection. All addressing is done via segment registers. This is what the original 8086 used.

Protected mode - the execution mode of x86 processors starting with the 80386 (technically it started with the 286, but it didn't really work so most people just used the 286 in real mode). This is a 32-bit mode with flat memory, paging, and memory protection. Processes are a real thing here (as in, the MMU exists) and all memory access takes place in a virtual memory map that prevents processes from overwriting each other's memory.

Long mode - introduced with AMD's 64-bit extensions to x86. Like protected mode, this mode uses flat memory, paging, and memory protection.

When they moved to 32-bit, Intel added a virtualization subsystem called virtual 8086 mode that could be accessed within protected mode. This essentially creates an isolated virtual 8086 execution context (as a process) that can run real mode (16-bit) code with only a small shim in the host operating system. This virtual 8086 processor mode is not available in long mode (64-bit) and so real mode software cannot be run directly on the processor.

32-bit software running on 64-bit is actually straightforward.

Well it doesn't sound that straightforward to me. Like, as soon as you brought up x86 processor modes, all that does is bring up more questions. What's a "mode", precisely? What's flat memory, paging, and memory protection, exactly? Who decided it wasn't needed originally? Who decided it would have to be implemented in the later processor generations? Why?

Like I'm barely one sentence in and you've lost me. It feels like I'd have to go to university just to learn the history of programming architecture, before actually getting into the programming part.
 
Like I'm barely one sentence in and you've lost me. It feels like I'd have to go to university just to learn the history of programming architecture, before actually getting into the programming part.
You would essentially have to have taken a computer architecture course (and maybe an operating systems course too) to understand a lot of it. The point I was trying to make with my autistic post was that 64-bit Windows not being able to execute 16-bit code isn't a software problem but a hardware one. It's not something Microsoft explicitly chose to disallow but something they were forced to disallow by the designers of the processor architecture.
 
Well it doesn't sound that straightforward to me. Like, as soon as you brought up x86 processor modes, all that does is bring up more questions. What's a "mode", precisely? What's flat memory, paging, and memory protection, exactly? Who decided it wasn't needed originally? Who decided it would have to be implemented in the later processor generations? Why?

Like I'm barely one sentence in and you've lost me. It feels like I'd have to go to university just to learn the history of programming architecture, before actually getting into the programming part.
It made more sense if you were going through it at the time. Obviously other architectures had their own growing pains, but the x86 situation was particularly messy. The 8080 processors the 8086 was intended to supplant had already introduced 16-bit memory addressing to allow access to a full 64 KiB of memory, but Intel could clearly see that in the future, systems running larger or multiple programs would need to access more than that.

But, there was already a whole bunch of software written for those older 8080 and compatible systems, and the likes of Intel and IBM could see that it was a no-brainer to ensure that more than 64 KiB could be addressed. So they kept the internal instructions and their own assembly language for the chip similar enough to allow ready translation by hand or by a specialised conversion program, and they designed the architecture to allow addressing about as much memory as you could with 20-bit memory addressing (roughly 1 MiB), but with a particular form of what's called segmented addressing.

Essentially, in real mode, programs continue to use 16-bit memory addresses, but internally, rather than just using the address that is specified, the CPU will look at a value held in one of a few 16-bit segment registers, 'shift it left' by four bits (multiplying it by 16), and add the two together. In this way, an old application written in assembly for an 8080 or Z80 system could be migrated over to the new architecture relatively unchanged, and be given access to its own full 64 KiB block of memory.

The cool thing about this is that you could be running a modern OS like IBM PC DOS/MS-DOS, load up any special drivers that you need for your fancy new hardware like mice, load up 'TSR' programs that could do things like provide a popup calculator or monitor for virus-type activity, and the OS could still give an old migrated CP/M application the full 64 KiB to play with, likely more than it would have ever had on the original target hardware.

And, because of that rather weird way that the memory was addressed, with the segment registers allowing access to a 64 KiB block of memory starting anywhere from the bottom to the top of memory at 16-byte intervals (which also meant that a single point in memory, rather than having just one memory address, could be reached with up to 2^16/2^4 (4096) combinations of segment register and regular address), wastage of memory was kept to a minimum.
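If it helps, the arithmetic itself is tiny. A quick sketch of how a real-mode address is formed, plus a brute-force count of the aliasing mentioned above (how many segment:offset pairs hit one physical byte):

Code:
#include <stdint.h>
#include <stdio.h>

/* Real-mode address translation: physical = segment * 16 + offset. */
static uint32_t linear(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

int main(void)
{
    /* Two different segment:offset pairs naming the same byte. */
    printf("1234:0005 -> %05X\n", (unsigned)linear(0x1234, 0x0005)); /* 12345 */
    printf("1000:2345 -> %05X\n", (unsigned)linear(0x1000, 0x2345)); /* 12345 */

    /* Count every pair that reaches physical address 0x12345.
     * For an address this far from the ends of the 1 MiB space,
     * it comes out to 4096, i.e. 2^16 / 2^4. */
    uint32_t target = 0x12345;
    unsigned count = 0;
    for (uint32_t seg = 0; seg <= 0xFFFF; seg++) {
        uint32_t base = seg << 4;
        if (base <= target && target - base <= 0xFFFF)
            count++;
    }
    printf("aliases of %05X: %u\n", (unsigned)target, count);
    return 0;
}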

But, new applications developed for DOS could fairly easily take as much as 64 KiB for their application code and 64 KiB for data in memory, and with some complex assembly programming (or just using a compiler) use even hundreds of KiB.

Bear in mind that the original IBM PC was offered with 16 KiB of memory as the base configuration.

Of course this was all very confusing compared to just having wider memory addresses, and there were heaps of other confusing details involved in the actual implementation of the hardware IBM would use in the PC, XT, and successive units, but it dealt with the simplest use case (<64 KiB program & data memory use) just fine, and Intel hardware engineers contented themselves with the fact that really really really smart engineers could do cool tricks if they needed to use more.

The articles linked from https://en.wikipedia.org/wiki/Template:X86_processor_modes and to a lesser extent https://en.wikipedia.org/wiki/X86_memory_segmentation are actually reasonably good explanations of the various matters involved.

Note that I am a drooling retard so most of the above may be incorrect in whole or in part.

It's not that the components are missing. It's that the processor can't run real mode code while in 64-bit long mode.
I've been reading a bit about this, and apparently Intel VT-x re-adds support for the old virtual 8086 mode that was used for real-mode emulation in Windows 3.x/9x and by DOSEMU on Linux. The latest DOSEMU version apparently makes use of this, so MS could probably support running old 16-bit Windows code, but no one would pay them for the privilege.
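For anyone curious what "using the hardware for this" looks like from software, here's a minimal sketch using Linux's KVM interface (the same virtualization facility DOSEMU can sit on top of): it creates a VM, drops a few bytes of real-mode code into guest memory, and runs them on a 64-bit host. Error handling is stripped for brevity and the guest code is just a toy; this is a sketch of the idea, not anything DOSEMU actually ships.

Code:
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Real-mode guest code: mov dx,0x3f8 ; mov al,'A' ; out dx,al ; hlt */
    const uint8_t code[] = { 0xBA, 0xF8, 0x03, 0xB0, 'A', 0xEE, 0xF4 };

    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);

    /* One page of guest physical memory at 0x1000, holding the code. */
    uint8_t *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0x1000,
        .memory_size = 0x1000,
        .userspace_addr = (uint64_t)mem,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);
    size_t mmap_size = (size_t)ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);

    /* Leave the vCPU in real mode: zero-based CS, IP pointing at the code. */
    struct kvm_sregs sregs;
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0;
    sregs.cs.selector = 0;
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);

    struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    /* Run until the guest executes HLT, echoing its port 0x3f8 output. */
    for (;;) {
        ioctl(vcpufd, KVM_RUN, NULL);
        if (run->exit_reason == KVM_EXIT_HLT)
            break;
        if (run->exit_reason == KVM_EXIT_IO && run->io.port == 0x3f8 &&
            run->io.direction == KVM_EXIT_IO_OUT)
            putchar(*((char *)run + run->io.data_offset));
    }
    putchar('\n');
    return 0;
}

The point is that the vCPU never leaves real mode, yet the 64-bit host runs it at native speed; that's the hole the lack of virtual 8086 mode in long mode left, which hardware virtualization fills back in.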
 
Note that I am a drooling exceptional individual so most of the above may be incorrect in whole or in part.
Nah, you're pretty much right on the money. The real problem is that explaining the issue requires explaining ~30 years of processor development and no matter what you do, it's going to sound autistic.
 