IBM Mainframe Questions  
User currently offlineAirstud From United States of America, joined Nov 2000, 2764 posts, RR: 4
Posted (2 years 10 months 3 days 5 hours ago) and read 1982 times:

So, in my current job, apparently I now have to start doing mainframe stuff. I've been in the job since May of 2010, and it's in my company's mainframe group, so getting up to speed on the mainframe is starting to look like a good idea. (So far they've had me doing cool data transmission stuff and dorky AutoSys stuff.)

1) Why are there LPARs? We have two or three mainframe boxes, and each box is divided into several LPARs - but each of the LPARs runs z/OS and, as far as I can tell, accesses the same datasets, DASD, CICS regions, etc. I could understand it if we wanted to run z/OS in one LPAR and, say, Linux/390 in another - but if it's the same z/OS version all across, then why divide? Why are 5 nickels better than a quarter?

2) Why do we not have a webcam in the tape library? When you restore a dataset that's been farmed out to tapelib back to disk, obviously this causes the little "accessor" robot in the tape library to scoot around, do a mambo, mount the tape, slide around, and moonwalk back to the old tape's slot. This is obviously the coolest thing that happens in the course of a workday, so why don't we get to watch it? I'm told there used to be a webcam in our tape library; for whatever reason, it's out of commission now. I quit.


Pancakes are delicious.
 
User currently offlineozglobal From France, joined Nov 2004, 2732 posts, RR: 4
Reply 1, posted (2 years 10 months 3 days 4 hours ago) and read 1953 times:

I used to be an IBM Mainframe system programmer in my first job.

LPARs (Logical Partitions) are created by a virtualization facility called PR/SM (pronounced "prism") and are managed below the software level, at the microcode level. This is much more efficient virtualization than dividing resources at the normal software level. Think VMware, but this came out more than 25 years ago and is much more efficient. Depending on your company's workloads, having more z/OS instances on the physical hardware may be more effective. They may be running a Sysplex, connecting all the z/OS instances together, sharing data and running as a single, very-high-availability system. You can achieve unparalleled availability this way, with OSes and LPARs dynamically sharing work and taking over tasks in case of a fault anywhere in the Sysplex.



When all's said and done, there'll be more said than done.
User currently offlinescbriml From United Kingdom, joined Jul 2003, 12885 posts, RR: 46
Reply 2, posted (2 years 10 months 3 days 2 hours ago) and read 1923 times:

Mainframe? Call that a mainframe? Pah!   

When I started working for the World's Favourite Airline, way back in 1975, they had four IBM 360 series mainframes. Each had a whole 256 KB of core memory in a cabinet the size of a large wardrobe. The computer room was the size of a couple of football pitches. Today, my watch probably has more processing power than those four combined.
OMG, I used to work on that!



Time flies like an arrow, but fruit flies like a banana! #44cHAMpion
User currently onlineRevelation From United States of America, joined Feb 2005, 12963 posts, RR: 25
Reply 3, posted (2 years 10 months 3 days 1 hour ago) and read 1906 times:

Quoting Airstud (Thread starter):
I could understand it if we wanted to run z/OS in one LPAR and, say, Linux/390 in another - but if it's the same z/OS version all across, then why divide?

Suppose you want to upgrade to a new z/OS release. You shift the live services to one LPAR, upgrade the second LPAR, test it, switch services over to the second LPAR, then upgrade the first and bring it back online. The same goes for other major pieces of software (databases, etc).
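In rough pseudocode, the dance looks something like this (a toy Python sketch; the LPAR names and the routing/upgrade/test helpers are invented for illustration, not any real z/OS interface):

    # Toy sketch of the rolling-upgrade sequence described above. The helpers
    # (route_traffic, upgrade, smoke_test) are hypothetical stand-ins.
    def rolling_upgrade(lpars, route_traffic, upgrade, smoke_test):
        """Upgrade each LPAR in turn while the others carry the live load."""
        for target in lpars:
            others = [lpar for lpar in lpars if lpar != target]
            route_traffic(others)        # shift live services off the target
            upgrade(target)              # apply the new OS / software level
            if not smoke_test(target):   # verify before trusting it again
                raise RuntimeError(f"upgrade of {target} failed verification")
            route_traffic(lpars)         # bring the target back into rotation

    # e.g. rolling_upgrade(["LPAR1", "LPAR2"], route_traffic, upgrade, smoke_test)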

Redundancy is good for either planned (hardware or software maintenance) or unplanned (crashes, etc) outages.

And it's great for the vendors: typically, they get to sell you two of everything!  

Haven't touched a mainframe since 1990, but once those things get into your DNA, they are damned hard to get out.

My current firm also loves redundancy, again because you sell lots more stuff with the intent of surviving outages. In the real world it doesn't always work out that way, though. As one friend says, your backup plan is only as good as the last time you tested it. And we programmers aren't too fond of it: it makes us write lots of code to keep the N instances in sync with each other. That gets hairy in the case where the two instances aren't running the same software, as in the scenario above. Each new version of the software has to keep supporting every old version still in service, and that requires a lot of testing to make sure you got it right, and sometimes you don't...
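To make the version headache concrete, here's a toy Python sketch (the message layout, field names and version numbers are all invented; real products do this with far more ceremony):

    # Each new software level has to keep a handler around for every old
    # message format still in service somewhere in the complex.
    HANDLERS = {
        1: lambda payload: {"user": payload["usr"]},              # old field name
        2: lambda payload: {"user": payload["user"]},             # field renamed
        3: lambda payload: {"user": payload["user"],
                            "region": payload.get("region")},     # new optional field
    }

    def decode(message):
        """Dispatch on the peer's protocol version."""
        handler = HANDLERS.get(message["version"])
        if handler is None:
            raise ValueError(f"unsupported protocol version {message['version']}")
        return handler(message["payload"])

    print(decode({"version": 1, "payload": {"usr": "fred"}}))
    print(decode({"version": 3, "payload": {"user": "fred", "region": "LPAR2"}}))

Every one of those branches needs its own tests against every peer version still in the field, which is exactly where the "sometimes you don't" comes in.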



Inspiration, move me brightly!
User currently offlineconnies4ever From Canada, joined Feb 2006, 4066 posts, RR: 13
Reply 4, posted (2 years 10 months 2 days 19 hours ago) and read 1838 times:

Quoting scbriml (Reply 2):
Mainframe? Call that a mainframe? Pah!

When I started working for the World's Favourite Airline, way back in 1975, they had four IBM 360 series mainframes. Each had a whole 256 KB of core memory in a cabinet the size of a large wardrobe. The computer room was the size of a couple of football pitches. Today, my watch probably has more processing power than those four combined.

Agreed. In those days you really had to think about how much memory to allocate for any function, often contorting yourself so that one half of a word was used for this, the other half for that. Thank God for EQUIVALENCE. Often you had to resort to overlays, or spool your partial result to tape for later retrieval. And you try to tell the young people that today, and they won't believe you!



Nostalgia isn't what it used to be.
User currently onlineRevelation From United States of America, joined Feb 2005, 12963 posts, RR: 25
Reply 5, posted (2 years 10 months 2 days 18 hours ago) and read 1822 times:

Quoting scbriml (Reply 2):

When I started working for the World's Favourite Airline, way back in 1975, they had four IBM 360 series mainframes.

Eh? In 1975, the 360 was already obsolete. The World's Favorite Airline should have moved on to the S/370-168 by then:



[Edited 2012-02-22 12:43:06]


Inspiration, move me brightly!
User currently offlineType-Rated From , joined Dec 1969, posts, RR:
Reply 6, posted (2 years 10 months 2 days 15 hours ago) and read 1764 times:

The tape machine that you are talking about - are the tapes stored in little bullet-type containers? If so, that's an old IBM hierarchical storage device. I worked at a company for a while between aviation stuff as a systems programmer. It operated under software called the Hierarchical Storage Manager. Fascinating to watch. But I haven't even heard of one of those machines in probably 25 years or so.

User currently offlineNoWorries From United States of America, joined Oct 2006, 539 posts, RR: 1
Reply 7, posted (2 years 10 months 2 days ago) and read 1662 times:

I haven't touched an IBM mainframe in about 15 years, but back in the day, LPARs also allowed resources like CPU time and memory to be allocated by partition, guaranteeing a certain level of performance for one partition so that performance problems in other logical partitions don't spill over.
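The basic idea is proportional shares, something like this toy Python sketch (the partition names, weights and CP counts are invented; the real PR/SM entitlement math is more involved):

    # Split total CPU capacity in proportion to each partition's weight.
    def entitlements(weights, total_cps):
        total_weight = sum(weights.values())
        return {lpar: total_cps * w / total_weight for lpar, w in weights.items()}

    weights = {"PROD": 60, "TEST": 30, "DEV": 10}   # hypothetical LPAR weights
    for lpar, share in entitlements(weights, total_cps=16).items():
        print(f"{lpar}: entitled to about {share:.1f} CPs")

IIRC a capped partition can't exceed its share even when the box is otherwise idle, while an uncapped one can borrow whatever the others aren't using.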

IBM mainframes also have the equivalent of VMware -- it's z/VM. I think it's lost some of its luster when PR/SM came along, but it's still handy (I imagine) when there's a need to virtualize other resources (z/VM performs full virtualization of nearly all processor and peripheral functions), whereas PR/SM is more of a physical partitioning -- but like I said, it's been a few decades, so perhaps that's changed. (When I did mainframes, SNA was the communications paradigm and no one had the foggiest idea about the Internet or its protocols.)


User currently offlinePITingres From United States of America, joined Dec 2007, 1163 posts, RR: 13
Reply 8, posted (2 years 10 months 1 day 21 hours ago) and read 1638 times:

Quoting Airstud (Thread starter):
Why are 5 nickels better than a quarter?

Because if one of the nickels crashes, you still have 20 cents left.  

Resource control and fault isolation are two big reasons. In addition, way back in the day, a mainframe was better at doing one thing fast than many things more slowly (very generally speaking, of course) and I gather that partitioning was one way to share hardware resources among disparate application loads.



Fly, you fools! Fly!
User currently offlineozglobal From France, joined Nov 2004, 2732 posts, RR: 4
Reply 9, posted (2 years 10 months 1 day 21 hours ago) and read 1627 times:

Quoting NoWorries (Reply 7):
IBM mainframes also have the equivalent of VMware -- it's z/VM. I think it's lost some of its luster when PR/SM came along, but it's still handy (I imagine) when there's a need to virtualize other resources (z/VM performs full virtualization of nearly all processor and peripheral functions), whereas PR/SM is more of a physical partitioning -- but like I said, it's been a few decades, so perhaps that's changed. (When I did mainframes, SNA was the communications paradigm and no one had the foggiest idea about the Internet or its protocols.)

The IBM VM OS came out in the early 70's and was developed by academics who needed to find an economical way to share an expensive centralized resource such as a mainframe. All of today's VMware features were there, just 40 years earlier! Amazing the loss of technology that happened when mid-range servers took over the market. There are almost two generations of lost disciplines and know-how to catch up on!

The VM OS is less efficient than PR/SM as it operates at the software/OS level, whilst PR/SM operates at the microcode level. Also, I believe you can achieve almost all of VM's features equally via PR/SM, and more efficiently.

What's more, don't forget that a z/OS system itself has the most sophisticated workload manager on the market. Without the need for multiple machines, physical or virtual, you can run an almost unlimited number of diverse workloads without one impacting the others. The only limit is the capacity of the machine, which can be massively scaled.
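For the flavor of goal-based workload management, here's a deliberately crude Python sketch (the service classes, goals and adjustment rule are invented; the real WLM is vastly more sophisticated):

    # Compare each service class's achieved response time to its goal and
    # shift dispatch priority toward whatever is missing its goal.
    service_classes = {
        "ONLINE": {"goal_ms": 200, "achieved_ms": 350, "priority": 5},
        "BATCH":  {"goal_ms": 5000, "achieved_ms": 2500, "priority": 3},
    }

    def rebalance(classes):
        for name, sc in classes.items():
            pi = sc["achieved_ms"] / sc["goal_ms"]   # performance index: >1 means missing goal
            if pi > 1.0:
                sc["priority"] += 1                  # give it more dispatch priority
            elif pi < 0.7:
                sc["priority"] = max(1, sc["priority"] - 1)   # overachieving, give some back
            print(f"{name}: PI={pi:.2f}, priority now {sc['priority']}")

    rebalance(service_classes)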



When all's said and done, there'll be more said than done.
User currently offlineNoWorries From United States of America, joined Oct 2006, 539 posts, RR: 1
Reply 10, posted (2 years 10 months 1 day 21 hours ago) and read 1616 times:

Quoting ozglobal (Reply 9):
There are almost two generations of lost disciplines and know-how to catch up on!


This makes me cringe every time I think about it -- I spent > 10 years in the bowels of CP and CMS (including Assembler and channel programming) -- an environment that facilitated really tight and elegant solutions for system resource management. IBM did have one edge though -- they built the hardware, the hypervisor (CP), and the guests (CMS, MVS, etc.). Today we have Intel building the processor, other vendors building the integrated hardware, another vendor providing virtualization, and then of course the guests such as Windows, Linux, etc.

My "dream" retirement job is to find a z/VM site that's in need of some light programming.


User currently onlineRevelation From United States of America, joined Feb 2005, 12963 posts, RR: 25
Reply 11, posted (2 years 10 months 1 day 19 hours ago) and read 1589 times:

Quoting NoWorries (Reply 7):
IBM mainframes also have the equivalent of VMware -- it's z/VM.

Nope, PCs have the equivalent of CP, and it's called VMware...

Quoting NoWorries (Reply 7):
I think it's lost some of its luster when PR/SM came along, but it's still handy (I imagine) when there's a need to virtualize other resources (z/VM performs full virtualization of nearly all processor and peripheral functions), whereas PR/SM is more of a physical partitioning -- but like I said, it's been a few decades, so perhaps that's changed.

PR/SM is basically CP implemented in microcode.

Quoting ozglobal (Reply 9):
The IBM VM OS came out in the early 70's and was developed by academics who needed to find an economical way to share an expensive centralized resource such as a mainframe.

It was implemented at IBM's Cambridge (Mass. USA) Scientific Center by IBM employees in the 1960s, some of whom I got to know in the late 1980s when I worked for IBM. I wouldn't describe these folks as academics, but clearly the project had the goal of selling computers to academia, and early on it really didn't have much success in that. To a large degree, it was a research project into what could be done with virtualization. It became wildly popular within IBM as a way to use one mainframe to host many other mainframe instances, just as VMWare is now popular for the same reason. It also did develop a loyal following at many universities, to a degree because IBM distributed source code for it.

Quoting ozglobal (Reply 9):
The VM OS is less efficient than PR/SM as it operates at the software/OS level, whilst PR/SM operates at the microcode level.

Correct.



Inspiration, move me brightly!
User currently offlinescbriml From United Kingdom, joined Jul 2003, 12885 posts, RR: 46
Reply 12, posted (2 years 10 months 1 day 18 hours ago) and read 1586 times:

Quoting Revelation (Reply 5):
The World's Favorite Airline should have moved on to the S/370-168 by then:

The real-time reservation system ran on 370s, but the batch processing was still running on 360s (punched card and paper tape). Oh what fun splicing paper tape on night shift!



Time flies like an arrow, but fruit flies like a banana! #44cHAMpion
User currently offlineNoWorries From United States of America, joined Oct 2006, 539 posts, RR: 1
Reply 13, posted (2 years 10 months 1 day 14 hours ago) and read 1549 times:

Quoting Revelation (Reply 11):
Nope, PCs have the equivalent of CP, and it's called VMware...


Sure -- CP-40 and CP-67 beat them by a few decades.  
Quoting Revelation (Reply 11):
Quoting ozglobal (Reply 9):
The VM OS is less efficient than PR/SM as it operates at the software/OS level, whilst PR/SM operates at the microcode level.

Correct.


It was a while back -- but my recollection was that VM/ESA (so I assume z/VM) was very efficient -- not quite as good as PR/SM, but close -- I'm thinking IBM was claiming something around 95% efficiency.

Quoting Revelation (Reply 11):
PR/SM is basically CP implemented in microcode.


I'm going to guess that the PR/SM control program is just another 370/390/z type of processor running a stripped-down version of CP -- CP was so efficient there's probably not that much to be gained by dropping it into micro-code -- just my guess.


User currently offlineozglobal From France, joined Nov 2004, 2732 posts, RR: 4
Reply 14, posted (2 years 10 months 1 day 6 hours ago) and read 1513 times:

Quoting NoWorries (Reply 13):
Quoting Revelation (Reply 11):
PR/SM is basically CP implemented in microcode.


I'm going to guess that the PR/SM control program is just another 370/390/z type of processor running a stripped-down version of CP -- CP was so efficient there's probably not that much to be gained by dropping it into micro-code -- just my guess.

You don't have to guess. We're telling you: PR/SM is implemented at the microcode level to improve performance. It was a copy of Amdahl's Multiple Domain Facility (MDF). Remember that the IBM mainframe-compatible machines were quite sophisticated back in the 80s. On an Amdahl you could change the target architecture of the machine from 370/XA to 370 or 370/ESA and load a compatible operating system on the virtual machine. Amdahl then came up with the great idea of having multiple images, or "domains", active at the same time, even with different target architectures: MDF. At the time IBM only had the VM OS, which could not compete. PR/SM was the fight back.



When all's said and done, there'll be more said than done.
User currently offlineNoWorries From United States of America, joined Oct 2006, 539 posts, RR: 1
Reply 15, posted (2 years 10 months 1 day 2 hours ago) and read 1481 times:

Quoting ozglobal (Reply 14):
You don't have to guess. We're telling you: PR/SM is implemented at the microcode level to improve performance.


Got it. Microcode assists are very important for certain aspects of virtualization. IBM, Amdahl, and probably most other manufacturers have used some form of microcode assist to speed up hot spots in the operating system. Running logical partitions I'm sure requires many of the same kinds of microcode assists to run efficiently. My point is simply that there's quite a bit of "accounting" that goes on behind the scenes -- managing processors, scheduling I/O events, handling I/O interrupts, etc. Whatever the "thing" is that does all that work doesn't have to be buried directly in microcode; any sort of auxiliary processor running any kind of OS could do that work. I was guessing that since that type of work is very similar to what happens inside of CP, an auxiliary processor could easily be a 390 sort of engine running a stripped-down version of CP.


User currently onlineRevelation From United States of America, joined Feb 2005, 12963 posts, RR: 25
Reply 16, posted (2 years 10 months 1 day 2 hours ago) and read 1472 times:

Guess it's my time to play "geezer"!

Quoting NoWorries (Reply 13):
It was a while back -- but my recollection was that VM/ESA (so I assume z/VM) was very efficient -- not quite as good as PR/SM, but close -- I'm thinking IBM was claiming something around 95% efficiency.

It isn't that hard to be efficient at virtualizing a 360 family machine as opposed to an x86 because the 360 does all input/output through explicit I/O instructions and channel programs, as opposed to x86, which largely uses memory accesses to do I/O. Also, the mainframe I/O devices tend to be simpler, and since the 70s it's been known that virtualization was coming, so they tend to be designed with virtualization in mind.

Note I said since the 70s instead of since the 60s. VM had a slow uptake. Its first killer app (i.e. one that the MBAs could see $$$ on) was using it to test multiple OSes on one mainframe. It took off like wildfire within IBM, and sooner or later customers heard about it and decided they'd like to use it in a similar way (i.e. test the next version of the OS before going live with it), and for what we now call 'server consolidation', i.e. running different OSes as guests. The last part concerned the MBAs; they would rather you bought a new mainframe for each OS. However, they introduced software licensing to capture the revenue, and the general upswing in computer usage meant enough mainframes were going to be sold anyway.

The hardest part of virtualizing a 360/370 efficiently was virtual memory. The guest OSes did virtual memory on the models that supported it. Since memory was extremely expensive, it was almost always oversubscribed, and the guests all had strategies for doing virtual memory efficiently. Of course, at first they didn't know that they were running on top of CP, and CP too was using virtual memory to deal with the lack of real memory. Some of the first concessions made to support virtual machines were in this space. One was the V=R guest, i.e. you could tell CP that a guest had the same virtual and real memory addresses, so it wouldn't have to virtualize its memory at all. The other was a set of hooks where the guest could learn it was running on CP, disable its own virtual memory code, and let CP handle virtual memory for it.

There were documented cases where running on top of CP was faster than running on raw hardware, because CP's memory virtualization was much more efficient than the guest's!
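If you want the flavor of why that double translation hurts, here's a toy Python sketch (the page tables are tiny made-up dicts; a shadow table is just the precomputed composition of the two levels, and a V=R guest is the case where the second level is the identity):

    guest_pt = {0: 7, 1: 3, 2: 9}    # guest virtual page -> guest "real" page
    host_pt  = {3: 12, 7: 40, 9: 5}  # guest "real" page -> host real page

    def translate(vpage, guest_pt, host_pt=None):
        """Resolve a guest virtual page to a host real page."""
        greal = guest_pt[vpage]          # the guest OS's own translation
        if host_pt is None:              # V=R guest: guest real == host real
            return greal
        return host_pt[greal]            # CP's extra level of translation

    # Shadow table: compose both levels once so each access needs only one lookup.
    shadow = {v: host_pt[g] for v, g in guest_pt.items()}

    print(translate(1, guest_pt, host_pt))   # two lookups -> 12
    print(translate(1, guest_pt))            # V=R guest   -> 3
    print(shadow)                            # {0: 40, 1: 12, 2: 5}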

Within IBM, the CMS part of VM was widely used as a relatively user-friendly OS for employees to do e-mail and documentation with (XEDIT, anyone?) as well as to access various in-house apps like timecard reporting, etc. We even had "social networking" via FORUMs and gateways to USENET and the Internet, all on our "green screen" terminals (well, actually, we were spoiled by having terminals that supported four colors!). Keep in mind these were the days of the Intel 80286, before Windows 3, never mind Windows 95, never mind the world-wide web and browsers, so the mainframe had a lot going for it.

It was bog-standard to have a mainframe running thousands of virtual machines, each running a copy of CMS. While CMS is tiny and simple compared to Windows, it's still impressive to have pulled that off with the hardware of the time.

Quoting ozglobal (Reply 14):
PR/SM is implemented at the microcode level to improve performance.

Indeed. CP has to do it all in software, so what it does is set up interrupt handlers that fire every time I/O happens, and that's where it does the translation. Of course the OS is already doing virtual memory stuff, and that's all architecturally visible, and it's also doing time slicing. All of the above is much more efficient if you can do it in the microcode and avoid the interrupt from guest OS to VM.
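Something like this, in toy Python form (the device numbers and helper names are invented; the point is the extra hop from guest to hypervisor and back, which is what the microcode implementation avoids):

    GUEST_TO_REAL_DEVICE = {0x190: 0x3A0, 0x191: 0x3A1}   # made-up device mappings

    def issue_real_io(real_device, command):
        print(f"real I/O: device {real_device:#05x}, command {command!r}")
        return "ok"

    def guest_start_io(guest_device, command):
        # On real hardware this would be a privileged I/O instruction that traps.
        return hypervisor_intercept(guest_device, command)

    def hypervisor_intercept(guest_device, command):
        real = GUEST_TO_REAL_DEVICE.get(guest_device)
        if real is None:
            return "device not configured for this guest"   # reflect an error back
        return issue_real_io(real, command)                  # issue it on the guest's behalf

    print(guest_start_io(0x190, "READ"))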

Quoting ozglobal (Reply 14):
It was a copy of Amdahl's Multiple Domain Facility (MDF). Remember that the IBM mainframe-compatible machines were quite sophisticated back in the 80s. On an Amdahl you could change the target architecture of the machine from 370/XA to 370 or 370/ESA and load a compatible operating system on the virtual machine. Amdahl then came up with the great idea of having multiple images, or "domains", active at the same time, even with different target architectures: MDF. At the time IBM only had the VM OS, which could not compete. PR/SM was the fight back.

Thanks for pointing that out. I remember it now that you mention it, but at the time I was an IBMer, so I was wearing blue-shaded glasses! Luckily they did not make us junior programmers show up in blue suits, white shirts and red ties, but if I had gone into management, I would have had to start acquiring them!

I do remember the company throwing a big celebration when ESA was introduced. At lunch our cafeteria was transformed with tablecloths and real china, and steamship-style roast was served, and a dixie ragtime band was playing in the corner!

Also, later, a handful of new instructions were added to the ESA architecture, and we all got free cake and coffee!

Gene Amdahl is (was?) a genius, as was Seymour Cray (RIP).

Hitachi, on the other hand, was caught stealing IBM source code line for line.



Inspiration, move me brightly!
User currently offlineNoWorries From United States of America, joined Oct 2006, 539 posts, RR: 1
Reply 17, posted (2 years 10 months 1 day ago) and read 1453 times:

Quoting Revelation (Reply 16):
Guess it's my time to play "geezer"!


There are still a few of us dinosaurs trundling around. I wrote my first 370 Assembler program in 1973, the last in 1998 -- 25 years of bit-twiddling at the bare metal.

Quoting Revelation (Reply 16):
It isn't that hard to be efficient at virtualizing a 360 family machine as opposed to an x86 because the 360 does all input/output through explicit I/O instructions and channel programs,


For my money, the 370/390 architecture is one of the cleanest that was ever widely used (very RISC-like) -- and as you stated, the very structured I/O architecture made it relatively easy to virtualize. Self-modifying channel programs (a la dreaded ISAM) were the only fly in the ointment -- with the advent of VM they were strongly discouraged. IIRC, by default, VM didn't even honor them.
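For anyone who never had the pleasure, here's why they were such a pain for a hypervisor, as a toy Python sketch (the "CCWs" here are invented stand-ins, not real channel words):

    import copy

    # The guest builds a channel program (a chain of channel command words).
    guest_ccw_chain = [
        {"op": "SEEK", "arg": 10},
        {"op": "READ", "arg": 4096},
    ]

    # The hypervisor snapshots and translates the chain before starting the I/O.
    shadow_chain = copy.deepcopy(guest_ccw_chain)

    # The guest (or the channel program itself) then modifies the chain
    # mid-flight, ISAM-style.
    guest_ccw_chain[1]["arg"] = 8192

    print("guest chain: ", guest_ccw_chain)    # reflects the modification
    print("shadow chain:", shadow_chain)       # the hypervisor runs the stale copy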


User currently offlineozglobal From France, joined Nov 2004, 2732 posts, RR: 4
Reply 18, posted (2 years 10 months 23 hours ago) and read 1445 times:

Quoting Revelation (Reply 16):
Hitachi, on the other hand, was caught stealing IBM source code line for line.

I think you'll find that was Fujitsu.



When all's said and done, there'll be more said than done.
User currently onlineRevelation From United States of America, joined Feb 2005, 12963 posts, RR: 25
Reply 19, posted (2 years 10 months 23 hours ago) and read 1445 times:

Quoting NoWorries (Reply 15):
Whatever the "thing" is that does all that work doesn't have to be berried directly in microcode, any sort of auxiliary processor running any kind of OS could do that work. I was guessing that since that type of work is very similar to what happens inside of CP, an auxiliary processor could easily be a 390 sort of engine running a stripped down version of CP.

I did wonder how similar microcode was to the native instruction set, but I never did ask. I was told that those doing the PR/SM work did lift much of the code from the VM product and translated it to microcode, but indeed it was done with microcode on the CPU as opposed to auxiliary processors. However, there were lots of other auxiliary processors running around inside the mainframe complex of the day. I was told the channel controllers (those that implemented those self-modifying channel programs) were early versions of the RISC chips that were in the IBM RS/6000 workstations.

Also, the current fad where x86 servers have 'lights-out management' via an embedded processor is old hat for mainframe folks. That too was done via early RISC chips. Current x86 servers have embedded ARM cores that do similar things. Mainframes of the 80s had the type of thermal management that is now common in the x86 world, and would "call home" whenever they detected out-of-temp or other error conditions. It was funny to see an IBM tech in a suit, carrying a briefcase full of tools and parts, show up totally unexpectedly because the mainframe had called in and said it needed to be serviced.

Quoting NoWorries (Reply 17):
For my money, the 370/390 architecture is one of the cleanest that was ever widely used (very RISC-like) -- and as you stated, the very structured I/O architecture made it relatively easy to virtualize. Self-modifying channel programs (a la dreaded ISAM) were the only fly in the ointment -- with the advent of VM they were strongly discouraged. IIRC, by default, VM didn't even honor them.

I actually had a contract in the early 90s to work on the 5080 mainframe graphics terminals that were then in vogue, and was told their channel programs were self-modifying. These terminals were used by Boeing when they did the 777 in CATIA; they had made a massive investment in mainframe tech for that program. I seem to remember four or six 3090 mainframes with six CPUs each, and of course gobs of storage. However, even then the trend was to move the graphics processing off the mainframe and just use the mainframe as a data farm. That's exactly what the contract I was on was about, but funding was cut before we got very far.



Inspiration, move me brightly!
User currently offlineNoWorries From United States of America, joined Oct 2006, 539 posts, RR: 1
Reply 20, posted (2 years 10 months 22 hours ago) and read 1431 times:

Quoting Revelation (Reply 19):

I did wonder how similar microcode was to the native instruction set, but I never did ask. I was told that those doing the PR/SM work did lift much of the code from the VM product and translated it to microcode, but indeed it was done with microcode on the CPU as opposed to auxiliary processors. However, there were lots of other auxiliary processors running around inside the mainframe complex of the day. I was told the channel controllers (those that implemented those self-modifying channel programs) were early versions of the RISC chips that were in the IBM RS/6000 workstations.


I remember in the 308x/3090 days how difficult it was to do the IOCP gen -- I'd heard various rumors that there were either RS6000s or 3080/3090 type processors under the covers. I'm wondering if the z series still does the IOCP gen. Since all z processors come with PR/SM now, maybe they could just roll all the I/O management into PR/SM.


User currently onlineRevelation From United States of America, joined Feb 2005, 12963 posts, RR: 25
Reply 21, posted (2 years 10 months 21 hours ago) and read 1421 times:

Quoting ozglobal (Reply 18):
Quoting Revelation (Reply 16):
Hitachi, on the other hand, was caught stealing IBM source code line for line.

I think you'll find that was Fujitsu.

Oops, my bad... I guess I should have done my homework first!

Quoting NoWorries (Reply 20):
I remember in the 308x/3090 days how difficult it was to do the IOCP gen -- I'd heard various rumors that there were either RS6000s or 3080/3090 type processors under the covers.
http://en.wikipedia.org/wiki/IBM_801 says:

Quote:

The 801 architecture was used in a variety of IBM devices including channel controllers for their S/370 mainframes, various networking devices, and eventually the IBM 9370 mainframe core itself.

In the early 1980s the lessons learned on the 801 were put back into the new America Project, which led to the IBM POWER architecture and the RS/6000 deskside scientific microcomputer.

... so I guess that part of my memory wasn't as faulty ...

Quoting NoWorries (Reply 20):
I'm wondering if the z series still does the IOCP gen. Since all z processors come with PR/SM now, maybe they could just roll all the I/O management into PR/SM.

No idea. I never got involved in that aspect of things.

The last thing I worked on at IBM was AIX/ESA, a *nix for mainframes that could run native or as a guest of VM. It was only about 15 years ahead of its time. It did ship, but shortly thereafter ended up in the dumpster. Nowadays I see you can get Linux for the z series processors. The market is probably about the same, i.e. tiny. To be honest, there probably were more reasons to want it back in the 90s than today, but I don't really know how mainframe folks think.



Inspiration, move me brightly!