Larrabee/Knights Landing is actually a very interesting bit of hardware even if it's not a GPU. 'Lots of Atom cores rather than a few Core ones' is a bit radical, but it might pay off in HPC or servers. And maybe in the long run games will use thread pools and be scalable to this sort of hardware; a rough sketch of what I mean is below.
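Just to illustrate the "thread pool" point, here's a minimal sketch of per-frame work being fanned out across however many cores the machine has, so the same code runs on 4 big cores or 60+ small ones. The names (Entity, update_entity) are made up for the example, not from any real engine.

    // Minimal sketch: split independent per-frame jobs over all hardware threads.
    #include <algorithm>
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct Entity { float x = 0, vx = 1; };              // hypothetical game object

    void update_entity(Entity& e, float dt) { e.x += e.vx * dt; }

    int main() {
        std::vector<Entity> entities(100000);
        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::atomic<size_t> next{0};                      // shared index into the work list

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < workers; ++i) {
            pool.emplace_back([&] {
                // Each worker claims the next unprocessed entity until none remain.
                for (size_t j = next.fetch_add(1); j < entities.size(); j = next.fetch_add(1))
                    update_entity(entities[j], 1.0f / 60.0f);
            });
        }
        for (auto& t : pool) t.join();
        std::printf("updated %zu entities on %u threads\n", entities.size(), workers);
    }

The point being that nothing in the game code cares whether "workers" is 4 or 72, which is exactly the property a many-small-cores chip needs from its software.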
What makes this stuff interesting, even if you're too busy to actually play games, is that you've got a bunch of massive companies hiring very smart people and all competing hard. And still, they quite often miss what looks like an open goal. It's not like CPUs, where both AMD and Intel have their strengths and both produce 'good enough' chips for a couple of hundred bucks.
I would have liked to see what could have come out of Larrabee; a bunch of x86 cores with some ROPs/TMUs at either end sounds like what Kutaragi imagined Cell would have been like.
Talisman is another old GPU I'm still curious about. It seemed to be some kind of tile-based (deferred) renderer that Microsoft was developing at one point, cancelled in 1999 I think. That was when they had big plans for how they would push 3D on Windows. WinG was memory-holed, the wet dogshit that was Direct3D was drying up nicely, they had DirectEngine in the works at MonoLith, so why not a GPU? Their advantage would be the API and drivers: native D3D from day one, instead of Glide, PVGL, S3 MeTaL and all the other ones that existed during those 2-3 years. They were aiming to provide the platform (Windows), API (DirectX), software (VS/DXSDK/DirectEngine) and hardware (Talisman).
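For anyone unfamiliar with what "tile-based" means there, here's a rough sketch of the core idea (my own simplification, not Talisman's actual pipeline): bin each triangle into the screen tiles its bounding box touches, then shade one tile at a time so the working set fits in a small on-chip buffer instead of hammering framebuffer memory.

    // Rough sketch of the binning pass in a tile-based renderer.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Tri { float minx, miny, maxx, maxy; };   // screen-space bounding box only

    int main() {
        const int W = 640, H = 480, TILE = 32;
        const int tilesX = (W + TILE - 1) / TILE, tilesY = (H + TILE - 1) / TILE;
        std::vector<std::vector<int>> bins(tilesX * tilesY);   // triangle indices per tile

        std::vector<Tri> tris = { {10, 10, 100, 80}, {300, 200, 420, 400} };

        // Binning pass: record every tile a triangle's bounding box overlaps.
        for (int i = 0; i < (int)tris.size(); ++i) {
            int tx0 = std::max(0, (int)tris[i].minx / TILE);
            int ty0 = std::max(0, (int)tris[i].miny / TILE);
            int tx1 = std::min(tilesX - 1, (int)tris[i].maxx / TILE);
            int ty1 = std::min(tilesY - 1, (int)tris[i].maxy / TILE);
            for (int ty = ty0; ty <= ty1; ++ty)
                for (int tx = tx0; tx <= tx1; ++tx)
                    bins[ty * tilesX + tx].push_back(i);
        }

        // A real renderer would now rasterise tile by tile into a small on-chip buffer.
        int touched = 0;
        for (auto& b : bins) if (!b.empty()) ++touched;
        std::printf("%d of %d tiles touched\n", touched, tilesX * tilesY);
    }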
With all that in mind, the Xbox wasn't much of a surprise; it was in line with their current and previous efforts.
Later they were kicking around "DirectPhysics" as a default/optional physics integration in DirectX; that problem mostly solved itself.
The 10th-gen Ice Lake chips with Iris Plus graphics are better than the previous chips, but they still won't hit 60 FPS at 1080p on low settings in Rise of the Tomb Raider or Far Cry 5, which would irritate the hell out of me if I wanted to play those games.
https://www.pcmag.com/article/369884/intel-ice-lake-benchmarked-how-10nm-cpus-could-bring-majo
Iris is decent enough for what it is, but it's part of Intel's premium-priced chips, which means a user willing to pay that price, even on the desktop, ends up with something they won't use because it's not competitive with a $100 discrete card.
As far as I know it largely exists because Intel had tons of spare fabrication capacity at a previous node and knew they could crank out plenty of eDRAM there and use it as cache for a CPU/iGPU made on a different process node.