Virtual IMS User Group Sponsors

BMC Software

Virtual IMS User Group | October 2023

System z (Mainframe) as the Enterprise Information Server

Stan Muse
Consulting Services Account Manager
Edge Solutions and Consulting

Read the Transcription

[00:00:00] – Slide 1 : Title Slide

Hello, this is Stan Muse from Edge Solutions and Consulting, speaking to the Virtual IMS User Group on October 10, 2023. Today we will be talking about providing mainframe information (Db2, IMS, and VSAM) to AI and machine learning applications by building a dedicated z/OS Enterprise Information Server LPAR for Information as a Service.

[00:00:31.012] – Slide 2: Agenda

Here’s the agenda for our presentation today. We’ll begin by talking about some business and technical challenges facing several industries. My orientation is the finance industry, but it really applies to all industries.

We’ll talk about the AI imperative and why companies across industries need to move to implement AI to enhance their operations. We’ll talk about a solution overview of what I’m going to be presenting, the business justification, and some reasons for doing the project. We’ll talk about a few use cases. Then I’m going to go into the more technical part of the presentation on the System Z technical architecture and LPARs, and I’ll talk about the implementation steps for data sharing for Db2, IMS, and VSAM.

We’ll close with a summary and a question and answer session. I’m going to be providing this presentation as a PowerPoint, a PDF, and WAV files for downloading, and I have recorded audio on each slide, so you’ll see a little speaker icon at the top right. If you have the PPT file and want to go back, you can click on that speaker and get the audio. And feel free to distribute this presentation throughout your organization, or to any of your friends that you would like to distribute it to, and use it in any way you would like.

[00:01:55] – Slide 3: A Few Business and Technical Challenges

Thank you. As I said earlier, my orientation is the finance industry, and I’ve worked with a number of banking clients, both medium sized and large. Some of the business challenges facing those clients are reduced deposits, loan losses, rising interest rates due to inflation, and risk from their customers. Banks are getting sued left and right because of the people they do business with and for enabling those businesses, so it’s important to know your customer.

We’re going to be seeing heightened political and government regulatory scrutiny, especially when it comes to AI. There are all kinds of sessions in Congress right now about how they’re going to regulate AI so that it’s fair. We’re also facing people costs: rising wages and benefits, employee retention problems, and equity problems. As far as technical problems or challenges, the AI revolution really is in full swing. You can’t turn on the TV, at least the business channels, without seeing someone talking about AI.

And companies are racing to implement solutions to cut costs and add value. This is going to present many new technical challenges like cloud managed services, increasing complexity, and the move from monolithic computing models, which to me means everything is in one place, to microservices where applications are spread all over the place. Security obviously is a huge challenge, along with core systems modernization and then winding down or retiring the legacy systems. Distributed data management really means data that’s all over the place, and we’re seeing many new unstructured database types beyond the VSAM, IMS, and Db2 we’re going to be talking about today. Implementing new revolutionary technologies like analytics, machine learning, and AI really is not going to be optional for companies that want to survive. Larry Summers, who’s a pretty smart guy, and I don’t agree with everything he says, but I do agree with this, said that AI will be the most important technological development since the invention of fire and the wheel. Really, the purpose of AI to me is to make us all at least seem a lot smarter.

[00:04:30] – Slide 4: The AI Imperative

Let’s talk about the AI imperative. AI and machine learning will be implemented and heavily used by most successful companies, and I want to stress “successful”; adopting it is really unavoidable for a company that wants to keep existing. It’ll be very disruptive for the job market, just as data processing was. The first jobs that were eliminated by data processing were many thousands of accountants. I know because I was one of them. I worked at the controller’s office for restricted funds at Emory University, and I was one of those guys over here with the adding machine and the 13-column pad, and that’s how we ran the business.

So those jobs really have, for the most part, gone away. We still have people looking over the results, but the typical accountant job has really gone away, as have the telco operators. When was the last time anyone spoke to an operator on the telephone? Secretaries went away; we pretty much do that ourselves now. Gas station attendants are gone; gas station pumps now accept your credit card and you do all the work. And even the bank tellers, for the most part, have been eliminated by ATMs. It used to be that we counted the money when we got it out of the ATM. Now we just stick it in our wallet and trust it to be correct.

The modern digital age really would not have been possible without these advances in data processing. The same will be true of AI. AI, machine learning, and robotics will eliminate and enhance many jobs at all levels at the same time. The combination of AI and robotics will be even more disruptive and life enhancing, and really is unstoppable. To illustrate that, to the right you see the Mars rover. AI and robotics are now on another planet; there’s no way we’re going to stop them here on Earth. AI and machine learning have the potential to vastly improve our lives while also reducing waste. No longer do we keep a city street map in our car. We have our telephones Bluetoothed to our cars, and we just type in the address of where we want to go. And Christine takes us there. And we trust her not to become self-aware and send us where the hills have eyes. This is called generative AI because it produces the results in several different formats. You see a map on your console, you even see your position on the map, and it’s taking you on the shortest route with the least traffic, and even the least police, to your destination. Sometimes you do go around the block, but you do finally get there.

And who doesn’t use tax accounting software? I can’t remember the last time I really tried to do my own taxes, because the tax accounting software knows the questions to ask to get the most deductions so that I pay the government the least amount of money. But AI and machine learning are only as good as the data they have access to for producing their results. In the case of Google Maps, it has access to a lot of data: not only all the routes, but satellite data coming in that shows your position on the map, and data coming in from other drivers to tell you where all the traffic is and where the police are. So AI and machine learning are going to require a lot more data access than we were used to in the past.

Now, it can also be used to do harm by some bad actors like Boris and Natasha over there. And we can expect many new regulations coming from politicians who really know nothing about the technology. They got their position because they won a popularity contest by paying a lot of money, but they really don’t know anything about technology. The challenge for us is how will we force AI to be legally and socially compliant with those new regulations they’re going to be coming out with, which are typically very vague, without sacrificing valid results.

As far as expectations, implementation and widespread use will be disruptive but unavoidable to survive in the new digital era. It will vastly improve our lives while reducing wasted time, money, energy, pollution, et cetera, and will provide better than human-only results. It will require much more computing power than transaction processing does now. And that’s really the point of this presentation: we’re going to require much more computing power than we use today for transaction processing, and a lot more data access. Transaction processing is really record-at-a-time processing; AI is going to require access to whole data sets at a time. So, much more data access and much more computing power.

The technology can be used to do harm, but we can’t fear it. It’s not going to become self-aware, and we can’t avoid AI.

[00:09:48] – Slide 5: Enterprise Data Management Challenges Today

As I mentioned, AI is going to need access to vast quantities of data, and the results of those routines are only going to be as good as the data they had access to in producing those results. For decades, clients have been moving data from the mainframe for analysis and providing services. Over on the right, you see a really pretty picture of ETL multi-process steps where data is taken off the source systems and moved into data warehouses, data marts, data lakes, data boat houses, or whatever they call them, on and on. And this really adds no value to the data, because you’re just moving it around. Daily ETL volumes have grown from gigabytes to terabytes to petabytes, and ETL actually increases mainframe workload and costs.

I’ve worked with many clients that thought they were going to reduce the mainframe workload by ETL’ing the data off and doing the work someplace else. But what they’ve actually found is that they increase the mainframe workload, because moving all that data every day adds a lot of work. The data is often aggregated into a new schema, like a star or snowflake schema, and it loses meaning when it’s summarized and aggregated like this, away from the granular level at which it was created. Data duplication is really costly in terms of storage, staff, software licenses, et cetera. And C-level officers really can’t quantify these costs in terms of total cost of ownership.

They’re just waking up to the new digital age demands for data for AI and other analytics. Costly platforms like Teradata, Exadata, and other boutique systems consume a lot of the IT growth budget, so we’re spending money on those instead of new, “useful”, work. And the costs for cloud platforms like Snowflake, MongoDB, and others are really open-ended; we’ll see what those are once we get past the initial teaser startup rates for those systems.

Data latency is also a huge problem for many applications, like fraud detection and stock trading, where week-old data is really just not useful. Losses continue to grow, and these applications produce many false positives, which are really irritating for customers when their transactions are declined. Data security is compromised with thousands of ETL feeds to who knows where. They say you can’t hack a mainframe, but who needs to? The lack of distributed data security results in expensive data breaches. Everybody can recall the Equifax data breach that exposed half the country’s data. Hackers have many access points for those data breaches and ransomware attacks. And data encryption is really difficult, if not impossible, to manage over many copies of data on several platforms.

The enterprise data architecture really has become complex, unmanageable, and unsustainable. Server sprawl is very costly and is a common problem, with runaway, unquantifiable costs. So clients are really struggling with how to provide mainframe Information as a Service, and this is key to the success of customer analytics, machine learning, artificial intelligence, and many other new strategic solutions. As a result, enterprise data management and data access have become IT’s most costly, most visible problems in the new digital age.

[00:13:34] – Slide 6: Solution: Use the System of Record data in place with System Z Enterprise Information Server

So the solution we’re proposing today is to use the data, the system of record data that is, in place where it’s created, with System Z as the Enterprise Information Server. The goal is to provide seamless, primarily read-only, near real-time access to core systems data in place. We’ll allocate a separate z/OS LPAR as an enterprise-wide information server, direct the new AI/ML traffic to that LPAR, and stop expensive data proliferation to specialty servers and into the cloud. This is known as data gravity: we move the analysis to the data. So we’ll not only be using that LPAR to access the data, we can also run applications that use that data in that LPAR. This will provide a single point of entry for the data, especially for external users, which is very important.

The picture on the right shows the growth path for Z16. Z16 is delivered in standard 19-inch racks; you can start with a one-frame system and grow all the way to a four-frame system, and even a five-frame system can be specially ordered. Your IBM rep would be glad to give you a presentation on the growth path and the architecture of the Z16. Z16 really provides an unlimited growth path for what we’re talking about.

Unlike distributed server LPARs, which must be contained to one pizza box with the processors on that pizza box, an LPAR on Z16 can span all of the frames. So it really does provide an unlimited growth path for the Enterprise Information Server, as well as all of your other LPARs that you’re running on Z today. The purpose is to establish System Z as the corporate data server, eliminating a lot of other data servers for cognitive computing, AI, machine learning, analytics, and a lot of different kinds of reporting. It will provide access for mobile and distributed banking applications; 90% of the requests from those applications are read-only transactions like balance lookups. Data access for distributed or cloud applications can be provided as well, along with fraud detection and analysis. Edge can help with your project by providing templates for the operational model, a project plan, and Quick Start services. The operational model provides a minimum configuration for hardware, software, tools, and connectivity, and the project implementation plan includes a staffing and education plan, because there are some new technologies here which your staff may not be familiar with.

We can provide z/OS as well as Db2, IMS, and VSAM services for building a z/OS Sysplex and for data sharing. We’ll even help you with a cost-benefit analysis, which is going to be crucial. And this will give you negotiation points with IBM for future purchases, toward read-only, non-revenue-producing workload pricing. They’re looking at this. They’ve been trying to figure out how to do it from SMF records, but it’s really not possible from SMF records to understand whether a transaction is read-only or not; it’s just not recorded there. But IBM would like to be able to do that, similar to what they do for dev/test systems or DR systems: provide special pricing for those systems because they don’t run your core applications.

By doing that, IBM can lower your total cost of ownership for building this new LPAR and really provide better service from IBM.

[00:17:36] – Slide 7: Business Justification for zEIS

As with any new major project, we’re going to have to provide a business justification and cost-benefit analysis for the project. We’re going to do this by showing reduced complexity, lower costs, improved security, and reduced risk. We’ll reduce complexity by providing a simple data architecture to make the mainframe data easier to access and understand, and by providing one copy of the data in place with one set of metadata for all users to access and understand. We can lower costs by eliminating potentially thousands of distributed servers and their software and support costs, eliminating the duplicate data costs and data privacy costs, and hopefully eliminating a lot of data breaches. And we can stop paying for ETL processing and buying expensive ETL tools and the training that goes along with those.

As I mentioned, IBM is considering special hardware and software pricing for contained read-only workloads. Now, we’re not guaranteeing this; it is something you have to negotiate with IBM. I just want to mention here that Edge is not a remarketer for IBM hardware and software; all negotiations for those have to be done with IBM. You can almost justify the project on improved data security alone. We’ll provide a single entry point and pervasive encryption across the entire set of data. We’ll be able to monitor data usage. Most data breaches come from internal sources, and some do come from external ones.

We’ll be able to monitor data usage for ransomware attacks; if we detect one, we can shut down all access to the LPAR and stop it. We’ll have the ability to create a chargeback system for data usage, for the digital bank or for any Information as a Service client, which might provide an additional revenue stream to the corporation. And overall, better data governance. Data is growing exponentially, and most clients are really not ready for it.

Last point, we want to reduce risk. Now this is really important because it’s going to give us a place to try new solutions without impacting production systems. I’ve talked with some clients about this and one or two of them said “We can do all of this in our current production systems.” And my reaction is, “Well, you might try that, but not for long because the data usage patterns are so different from your transaction systems, it will impact those. You’ll be flushing your buffers all the time.” So this will give us a place to try new stuff, ad hoc query stuff, and AI routines are really untested, so you wouldn’t want to put those in one of your current production transaction system LPARs.

This would help to avoid long mainframe production system change control cycles for new data-centric solutions, help us to become more agile in delivering new strategic solutions like ML and AI, and enable agile analytic development. I’ve been on a few consulting engagements with large clients who told me up front that they really didn’t need a business case because the project had already been approved, only to get to the end of the project, or near the end, and have executives demand to see the business case for it. So a business case will be required. This will not be inexpensive and will need buy-in from the business units and executive management. So do not try to skip this task. It must be done up front, or as the project goes along, in order to receive approval for funding at the end of the project.

[00:21:32] – Slide 8: Top 10 Reasons for allocating a Dedicated z/OS LPAR as an EIS

For the next two slides, I want to give my top ten reasons for allocating the dedicated z/OS LPAR Enterprise Information Server (EIS). Now, these aren’t really in any order, but the first one I want to talk about is Access Isolation. This is most often overlooked by most companies, and it’s easier to manage access to only one LPAR instead of point solutions in every LPAR, as is sort of shown over here on the right in that confusing picture. It eases application programming interface management and eliminates confusion about the best access solution. If you do point-to-point architecture in all the LPARs, it is normally not well documented, and when somebody leaves, that knowledge leaves with them and nobody knows what it is anymore. This will help everyone by using one copy of the data for one version of the truth. I’ve been in many business meetings where there’s a lot of confusion and a lot of time wasted over what data is correct. This will provide one copy of the data and one version of the truth for that data. We’ll also provide better security, with one single place from which to monitor the data usage patterns and provide alerts for unusual data access. It’ll also be easier to shut down in case of detected denial of service, ransomware attacks, or data corruption attacks.

Another important reason is Workload Isolation. The informational access workload is growing rapidly and will become larger than today’s transaction and batch workloads. Isolating this workload will protect the core transaction and batch applications running the business from informational access and runaway ad hoc query costs. Isolating the workload will also allow for better hardware capacity planning, performance, and tuning for all workloads, including the read-only workloads and corruption detection. And it will be easier to understand where the requests are coming from, for planning on-premise, cloud, or mobile applications.

Number three, we can provide Better Hardware Resource Allocation and Usage. Dedicated cores, like central processors, zIIP engines, and specialty engines for compression and encryption, without transactional interference, are better for informational workload performance. We can use spare core capacity, and it can also be allocated at a lower priority, allowing for overall higher throughput.

Dedicated memory for separate IMS, Db2, and VSAM buffers can be provided for better response time for both informational and transactional systems because those data access patterns are very different and they’ll need their own set of buffers.

Communications traffic can be routed through dedicated Open Systems Adapter cards for no interference with current production applications, and we can provide faster access for Linux on Z and other Linux applications through the use of HiperSockets or OSA cards instead of going across the network.

Number four, we can provide an Information Usage Chargeback system. Today, most companies don’t have this; I don’t know of any corporation that really does have a chargeback system for informational access. This will be a requirement for total cost of ownership or ROI analysis. Having a chargeback system for outside B2B access to key core data could be a new revenue stream for the business and could transform the mainframe from a cost center to a revenue center.

Number five is Infrastructure Simplification. Having only one access point for information requests greatly simplifies things and should reduce the support cost versus point-to-point solutions. It’ll be easier to understand the infrastructure, which aids in agile development, API management, and new RESTful services for business units. It provides a central place for access to the enterprise metadata, for understanding the data from a business user standpoint. Most clients have thousands of distributed servers, and each distributed server is the equivalent of at least one LPAR. The mainframe already has a hypervisor and runs many LPARs, so allocating a new LPAR should be nothing new for most clients. Thousands of distributed servers could potentially be eliminated by just one or two z LPARs, and you could collapse an entire server farm onto one Z16, as shown on the right.

[00:26:34] – Slide 9: Top 10 Reasons for allocating a Dedicated z/OS LPAR as an EIS

Continuing the top ten reasons for the project. Number six, z/OS Analytics Tools Availability. Many analytics and AI tools will run well under z/OS or on Linux on Z, and these new analytics and AI tools may not be allowed to run in the production transaction LPAR because of their data access patterns.

API dashboards and Know Your Customer analytics tools could also be enabled against all enterprise data, not just the data on Z, but also data in the cloud or on distributed servers.

Number seven, the key to using the data is the ability to virtualize and federate it for real-time analytics. Near real-time analytics or access is required by many new applications, from mainframe and distributed servers, for fraud detection and corruption prevention. All data can be federated or joined, cleansed, summarized, and presented to the remote requester, reducing the amount of data transmitted. Key to this is going to be the first bullet item here, Data Virtualization Manager, and we’re going to go through how to install and use it. Also listed are some other federation tools provided by IBM, and there are others from other vendors. A lot of the ETL work could be eliminated with direct data access.

Number eight, we can build an active core banking or enterprise applications environment. Many customers, smaller customers at least, run everything, all their production systems, in one LPAR. This would give us a data sharing capability, so that if we had to take down that production LPAR, we would at least have informational read-only access to the data.

Number nine, the ability to reduce costs and take advantage of IBM special pricing for hardware and software for read-only systems. IBM already provides software container pricing for solution and new workloads, which greatly reduces the monthly license charges. This would have to be negotiated directly with IBM or with the IBM business partner that remarkets the hardware and software.

And finally, number ten, the ability to respond quickly to regulators and auditors. GDPR is not just for the EU, it’s global; if you have an EU client as one of your clients, then it applies to you too. US firms must be able to quickly respond to auditors’ and regulators’ requests. The ability to show data flows from the system of record, or the source system, to the final target database and user is required. If you have to defend yourself in court, it will be easier to do from one place, instead of having to track the data from the source to all of the data warehouses, data marts, data lakes, whatever. We can provide consistent data obfuscation and masking for compliance from a single point of control; key to that would be something like Optim Data Privacy.

[00:29:54] – Slide 10: A Few zEIS Use Cases

Key to justifying the project will be coming up with some use cases for the business. I’ve listed here about a dozen use cases. You can read over these and see which ones apply to you, or you might come up with some new ones. But take some of these use cases back to the business, see if it applies to them and see if they would like to do it, and that will help you justify the project.

[00:30:19] – Slide 11: Typical Mainframe Production Data Sharing HA Conceptual Model

Hopefully now everyone sees the value in implementing a Z Enterprise Information Server LPAR. So now we’ll move on to the more technical portion of the presentation. Shown here is the typical mainframe production data sharing and high availability conceptual model. Most sophisticated mainframe clients already have at least two production LPARs in an active data sharing environment for high availability plus a remote site DR capability.

But you’d be surprised at how many clients still run all their production in one production LPAR, with even dev/test sharing production data for testing. This conceptual model is what I see most often implemented in at least the larger regional banks and insurance companies. Production systems are run in one z/OS LPAR, LPAR1, joined by a coupling facility to LPAR2, which is a high availability LPAR. This is called a data sharing Sysplex, or GDPS Metro Mirror.

Typically, the two LPARs are in two separate machines on the same floor, or at least on the same campus, joined by a coupling facility which handles all the locking structures and data buffer structures. The coupling facility is its own LPAR and can be run on either one of the two machines shown here, or it may be run in a separate smaller machine. Then something like Copy Services Manager or Geographically Dispersed Parallel Sysplex (GDPS) is implemented to maintain the data in the disaster recovery site. GDPS can be implemented with automated scripts for startup of all the systems needed to recover all of the work done in the production LPAR, and typically the disaster recovery site has enough standby processors to take over all of that workload. In this configuration, one copy of the production data is maintained across the two production systems.

In the event that the primary production LPAR, LPAR1, fails, the system can be failed over to the high availability machine and just keep on running. If the primary site is lost, in other words LPAR1 and LPAR2 are both gone, then the system can fail over to the disaster recovery site using GDPS scripts. Recently, clients have started implementing Logical Copy Protection or CyberVault for immutable copies that are air-gapped, for recovery of systems in case of corruption or ransomware attacks or things like that. These are copies of last resort. You really don’t ever want to have to use them, because it takes a long time: you first have to determine that the logical copies are valid and not corrupted themselves. One customer went about six months with corruption occurring, and of course all of the logical copies they had made were also corrupted.

If they had had a zEIS LPAR implemented, they could have been constantly running routines to detect that corruption and catch it just as it was starting, which is really what you want to do. You never want to have to fall back on those protected copies.

[00:34:31] – Slide 12: zEIS LPAR Conceptual Model

This busy chart shows that same conceptual model for primary production and HA in a data sharing environment, with the new Z Enterprise Information Server LPAR introduced. The new zEIS LPAR could run on one of those existing two machines, or it could run on its own separate machine. It could also use one of the existing coupling facilities or have its own new coupling facility along with it. Key here is that we still have only one copy of the data in a data sharing Sysplex environment. Applications in either of the two production LPARs, or coming in from distributed systems, could access the data on the new zEIS LPAR. The data could also reside on a Db2 Analytics Accelerator, either a standalone machine or, with the new Version 7, built internally to System Z or LinuxONE with dedicated IFLs. And we can take advantage of the Geographically Dispersed Parallel Sysplex environment for remote recovery.

[00:35:26] – Slide 13: Expanding the Architecture with IDAA and Linux on Z

This chart expands on that architecture by introducing the Db2 Analytics Accelerator and a Linux on Z environment. There’s still only one copy of the data shared across all these systems, and the new Db2 Analytics Accelerator runs on its own special version of Red Hat Linux, primarily on IFLs (Integrated Facility for Linux processors), which are special purpose processors. The near-real-time data is replicated from Db2 to the IDAA, and it can also be loaded from VSAM or IMS or other sources. The Db2 Analytics Accelerator has a feature called High-Performance Storage Saver, which can house the historical Db2 data and allow you to shrink the size of the primary production Db2 data while still keeping it available for query.

Then to the far right we show a new Linux LPAR, with the hypervisor and Linux running on IFLs with Linux virtual machines. There you can roll your own applications: AI and ML applications, analytics applications, or other types of applications that you want to run under Linux. Distributed servers could talk to that Linux LPAR through the OSA cards and have access to all the data from the zEIS LPAR. This greatly simplifies the typical infrastructure I’ve seen, because all of this can be run in one or two boxes, and all of the LPARs and data can be backed up at the DR site.

A lot of times when I talk to the z/OS people and ask them what do you do about recovering the Distributed servers at the DR site? They say, “Well, that’s their problem, that’s not ours. We’re just in control of the mainframe, we just backed that up.” But I’ve never seen a company that could actually run without the distributed servers available for their front ends.

Finally, much of the Distributed Server farm could be collapsed and consolidated into the Linux on Z environment, greatly simplifying the whole infrastructure.

[00:38:07] – Slide 14: CF, zEIS LPAR, IMS, Db2, and VSAM Data Sharing Implementation Steps

Next we’ll talk about the implementation steps for setting up the coupling facility, the zEIS LPAR, and the IMS, Db2, and VSAM data sharing groups, and finally installing Data Virtualization Manager (DVM). We’ll take these in order: first allocating the coupling facility LPAR, if you don’t already have one; then allocating the zEIS LPAR on the Hardware Management Console; setting up the data sharing groups for IMS, Db2, and VSAM; and installing DVM for accessing and joining all data via SQL. And this is not just the IMS, Db2, and VSAM data; this is virtually any data source you have on Z or on distributed systems, or that may be in the cloud. Then we’ll talk about connecting query and AI tools for access via DVM. Just a note for the IMS folks: there are two kinds of sharing, database-level sharing and block-level sharing. Nobody I know uses database-level sharing, although it does allow one read-write LPAR while all the other LPARs are read-only. Everyone uses block-level data sharing; that’s what I would advise. And this will probably require the implementation of GDPS. Edge Consulting has performed over 50 of these engagements for mainframe customers and typically does all of the GDPS work for IBM.

[00:39:24] – Slide 15: Coupling Facility Implementation Steps

In order to create a data sharing Sysplex, you’re going to need coupling facilities. Most installations use more than one coupling facility for a data sharing Sysplex, for high availability reasons. A coupling facility is an LPAR that can reside on a standalone machine, or it could run on your production machine or the high availability machine if you have one of those. My recommendation would be to use an existing coupling facility, or facilities, if possible. The next step is to review the coupling facility sizing and requirements. Since this is a read-only processing workload, we’re going to have minimal locking for the zEIS LPAR. There will be data buffer considerations, though, especially for Db2, since it keeps the group buffer pools in the coupling facility. So we’re going to need to look at structure sizing.

Next, we’ll plan for, format, and define the couple data sets, both the primary and alternate. These include the sysplex couple data sets and the function couple data sets. We’ll need to do some couple data set sizing analysis, and then we’ll set the policies: the Coupling Facility Resource Manager (CFRM), Automatic Restart Manager, Sysplex Failure Management, and Workload Manager policies.
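To make the policy step a little more concrete, here is a minimal sketch of the administrative data utility (IXCMIAPU) job that defines a CFRM policy. The policy, coupling facility, and structure names, and the sizes (in 1K units), are placeholders only; real sizes come from your structure sizing analysis, and the CF statement also needs the machine identifiers (TYPE, MFG, PLANT, SEQUENCE, PARTITION, CPCID) taken from the HMC.

    //DEFCFRM  EXEC PGM=IXCMIAPU
    //SYSPRINT DD  SYSOUT=*
    //SYSIN    DD  *
      DATA TYPE(CFRM) REPORT(YES)
      DEFINE POLICY NAME(ZEISPOL) REPLACE(YES)
        CF NAME(CF01) ...
        STRUCTURE NAME(ZEISIRLM_LOCK1) SIZE(65536)  INITSIZE(32768)
                  PREFLIST(CF01)
        STRUCTURE NAME(ZEISDB2_GBP0)   SIZE(262144) INITSIZE(131072)
                  PREFLIST(CF01)
    /*

After the policy is defined, it is activated with SETXCF START,POLICY,TYPE=CFRM,POLNAME=ZEISPOL.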

Next we’ll plan the common timing reference through Server Time Protocol. Then we’ll plan the cross-system coupling facility (XCF) signal paths. And finally we’ll perform the actions on the Hardware Management Console (HMC) to define the coupling facility LPAR image, verify the image, and then activate the LPAR image. I wanted to show the keystrokes for this, but they were just too numerous; they’re fully documented in one of the references down there on the right, “Defining a Coupling Facility” in the IBM online documentation.

Then we’ll review the relevant SYS1.PARMLIB members to make sure everything is correct. And finally, all the systems must be IPL’d in order for the new LPAR to join the parallel Sysplex in data-sharing mode. Now, defining a CF is something that very few people actually get to do, and that will be really restricted in most shops as to who can do that, but Edge Consulting can help with that if needed.
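For reference, one of the SYS1.PARMLIB members reviewed here is COUPLExx, which names the sysplex and points at the couple data sets. A minimal sketch, with placeholder sysplex and data set names, might look like the following; IEASYSxx would also carry PLEXCFG=MULTISYSTEM and the COUPLE=xx pointer to this member.

    COUPLE   SYSPLEX(ZEISPLEX)
             PCOUPLE(SYS1.XCF.CDS01)
             ACOUPLE(SYS1.XCF.CDS02)
    DATA     TYPE(CFRM)
             PCOUPLE(SYS1.CFRM.CDS01)
             ACOUPLE(SYS1.CFRM.CDS02)
    DATA     TYPE(ARM)
             PCOUPLE(SYS1.ARM.CDS01)
             ACOUPLE(SYS1.ARM.CDS02)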

[00:42:01] – Slide 16: Allocate the new zEIS LPAR Implementation Steps

Next, we’ll talk about allocating the new zEIS LPAR. This is going to be very similar to allocating the coupling facility LPAR with a few minor differences.

First, we’ll determine the initial LPAR hardware requirements. Are we going to install this on an existing CEC or on a new machine? We’ll look at the network communication requirements, the initial DASD requirements, and the performance and processor requirements. We’re going to need at least one general purpose processor, because parts of z/OS, and any COBOL we put out there, are going to run on general purpose processors; then we’ll allocate some zIIPs. Now, even though the workload is zIIP eligible, not all of it will run on zIIPs.

IBM only allows a percentage, and they really don’t tell you what that is, of zIIP-eligible workload to run on zIIPs, and the rest runs on general purpose processors. And if zIIPs aren’t available, the work falls back to the GPs. This is going to take a lot of planning and preparation. Before we build the LPAR, we have to consider the SYSRES, the master catalog, and all the other subsystems, and review the members of the new SYS1.PARMLIB. Then we’ll define the new system in the hardware configuration definition, and we’ll create the new I/O definition file.
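The zIIP spill-over behavior described above is controlled in the IEAOPTxx member of SYS1.PARMLIB. As a small sketch (check the MVS Initialization and Tuning Reference for your release), two parameters worth knowing about are:

    PROJECTCPU=YES
    IIPHONORPRIORITY=YES

PROJECTCPU=YES makes RMF and SMF report how much of the workload would be zIIP eligible, which helps size the zIIPs for the new LPAR, and IIPHONORPRIORITY=YES is what allows zIIP-eligible work to overflow to general purpose processors when the zIIPs are busy; setting it to NO makes that work wait for a zIIP.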

Then we’ll perform the actions necessary on the Hardware Management Console to define the LPAR image; again, we’re going to have at least one general purpose processor and several zIIPs. We’ll load the new I/O definition file to the support element, creating the IOCDS. Then we’ll activate the LPAR image, and finally we’ll IPL the new LPAR.

Step number five, we’re going to install or clone all the needed software: subsystems like IMS, Db2, and CICS, any ISV products and tools, and Data Virtualization Manager, which we’re going to talk about next and which is really key to tying the whole project, and all the data, together. At the bottom right there’s more information, including a tool to help you design the LPAR, and the relevant online documentation from IBM on how to build the LPAR, with all of the keystrokes on the Hardware Management Console.

[00:44:32] – Slide 17: zEIS LPAR IMS Data Sharing Implementation Steps

Next, we’ll talk about IMS data sharing implementation for the zEIS LPAR. This LPAR will be implementing data sharing in read-only mode, and we’re going to assume that the first IMS production subsystem has already been established in production.

Now, there’s database-level sharing and block-level data sharing. No one really uses database-level sharing, although it does allow for one read-write system while everyone else is read-only. We’ll be using block-level data sharing (BLDS); that’s what everyone does, and the IRLM will be required.

Now we want to determine which IMS subsystems and databases will be shared with the new LPAR. Block-level data sharing allows for sharing of HIDAM, partitioned HIDAM, HDAM, partitioned HDAM, HISAM, SHISAM, secondary indexes, DEDBs, and HSAM, so any of those database types can be shared with BLDS. Main storage databases and GSAM cannot be shared. We’re going to register the databases at share level 3, which allows multiple LPARs and IRLMs to share the databases. Again, we’re going to define these databases for this LPAR as read-only, so we’ll define the access as RO for read without integrity or RD for read with integrity for all the databases.
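As an illustration of the registration step, here is a hedged sketch of the DBRC commands involved. The database names are placeholders, and the job assumes the RECON data sets are dynamically allocated (otherwise add RECON1, RECON2, and RECON3 DD statements). INIT.DB registers a database that is not yet in the RECONs; CHANGE.DB raises the share level of one that is already registered.

    //DBRC     EXEC PGM=DSPURX00
    //STEPLIB  DD  DISP=SHR,DSN=IMS.SDFSRESL
    //SYSPRINT DD  SYSOUT=*
    //SYSIN    DD  *
      INIT.DB   DBD(ACCTDB)  SHARELVL(3)
      CHANGE.DB DBD(CUSTDB)  SHARELVL(3)
    /*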

On the PSB, we’ll use PROCOPT=GO for dirty read, or G for read with integrity, or GOT if we want to retry and go out to disk in case we’re locked out. For the IRLM, we’ll use SCOPE=GLOBAL, and there are several parameters to customize, like DEADLOK, TRACE, and LOCKTIME. We’ll start the databases with the /START DB command with GLOBAL and ACCESS=RO or RD, whichever you choose, plus DBALLOC, NOBACKOUT, and OPEN as appropriate. Use VSAM SHAREOPTIONS(3 3) and DISP=SHR (disposition shared) to allow the data sets to be shared across multiple host environments with GRS. We’ll define the coupling facility structures in the CFRM policy and the CFNAMES statement, which allows for three coupling facility structure names that must be passed to IMS: the IRLM, OSAM, and VSAM structures.
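Pulling those options together, here is a rough sketch; the PSB, database, and structure names are placeholders, and the CFNAMES structure names must match the structures defined in the CFRM policy.

A read-only PCB with dirty read (PROCOPT=GO) in the PSBGEN:

             PCB    TYPE=DB,DBDNAME=ACCTDB,PROCOPT=GO,KEYLEN=16
             SENSEG NAME=ACCTROOT,PARENT=0
             PSBGEN LANG=COBOL,PSBNAME=ACCTRDO
             END

Starting the database globally in read-only mode:

    /STA DB ACCTDB GLOBAL ACCESS=RO

And the CFNAMES statement in the DFSVSMxx member:

    CFNAMES,CFIRLM=IRLMCF01,CFOSAM=OSAMCF01,CFVSAM=VSAMCF01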

On the bottom right you see more information about data sharing with IMS and parallel Sysplex. The last bullet there is an IMS data sharing course from Credly. It’s free and it takes about 12 hours. It’s a really good course, and I recommend it.

[00:47:40] – Slide 17: Db2 Data Sharing Implementation Steps

Next, we’ll talk about Db2 data sharing implementation steps for the new zEIS LPAR. Db2 also uses the IRLM to manage its lock structures, which it keeps in the coupling facility. It also keeps group buffer pools there, as well as local buffer pools in each of the member LPARs. We’re going to assume that the first production LPAR is already there, up and running with Db2 databases, and we’re going to add the new zEIS LPAR as a data sharing member. So we’re going to run the installation CLIST for adding a new member, and that tailors 63 different DSN-type jobs for your environment.
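To give a flavor of what those tailored jobs end up defining, here is a hedged sketch of the z/OS side. The DSNTIJMV job adds a subsystem entry for the new member to SYS1.PARMLIB(IEFSSNxx); the subsystem name, command prefix, and group attach name below are placeholders.

    SUBSYS SUBNAME(DBE1) INITRTN(DSN3INI) INITPARM('DSN3EPX,-DBE1,S,DBEG')

Once the new member is started, a command such as -DBE1 DISPLAY GROUP DETAIL lists all the members of the data sharing group and their status, which is a quick way to verify that enabling data sharing was successful.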

Once we verify that all of these DSN jobs, all 63 of them, have been tailored correctly, we’ll run them to add the new Db2 data sharing member to the IRLM and MVS. Then we’ll initialize the system data sets and the bootstrap data set, and define user authorization exit routines if we need them. We’ll record Db2 data to SMF and establish subsystem security, both Db2 security and control over who can access the databases via RACF or any other security software you have. Optionally, we’ll connect IMS or CICS to Db2. We’ll connect Db2 to TSO and define Db2 to z/OS by running the DSNTIJMV job, which defines Db2 in SYS1.PARMLIB. Then we start the Db2 subsystem and optionally create an image copy of the Db2 directory and catalog. We’ll verify that enabling data sharing was successful and ensure that the workload manager address space is defined and available. Again, there’s more information on the bottom right for Db2 data sharing and installing Db2. Now let’s talk about VSAM data sharing implementation steps for the new LPAR.

[00:49:50] – Slide 18: zEIS LPAR VSAM Data Sharing Implementation Steps

VSAM works very similarly to IMS and Db2, with the coupling facility keeping the lock structures and buffer structures. In order for VSAM to participate in a data sharing Sysplex, it must employ VSAM Record Level Sharing, or RLS. We’re going to assume that the first VSAM production system is already established, so we’re going to use the same VSAM interfaces and data format. The VSAM file types that can be shared are Key Sequenced Data Sets (KSDS), Entry Sequenced Data Sets (ESDS), Relative Record Data Sets (RRDS), and Variable-Length Relative Record Data Sets (VRRDS). The data sets must be SMS-managed, and there can only be one RLS server per MVS or z/OS LPAR image.

The access mode is specified on the Access Control Block (ACB) or on the JCL, and we’ll be using record-level locking in the coupling facility, so serialization is at the record level rather than at the CI level (which some people also call the block). We’ll define the VSAM data sets as recoverable. Since this is a read-only system for VSAM, we’ll define it as read-only (GET) sharing across the batch jobs and address spaces. We’ll specify record level sharing access in the ACB or on the JCL, and we’ll open for input with the No Read Integrity option if you want to read without integrity, or open for input with read integrity.

Or use Consistent Read (CR), which sees completed updates only. Do not use repeatable read, Consistent Read Explicit (CRE), because of all the locking concerns and deadlocks we might run into; we’re only trying to read the data. Finally, we’ll review the SYS1.PARMLIB members and make any changes that are needed there. There’s a lot more information at the bottom right, in the form of Redbooks and presentations.
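To make those access options concrete, here is a hedged sketch. The data set name, storage class, key and record sizes, and space values are placeholders; LOG is the attribute that controls recoverability (NONE, UNDO, or ALL).

    //DEFRLS   EXEC PGM=IDCAMS
    //SYSPRINT DD  SYSOUT=*
    //SYSIN    DD  *
      DEFINE CLUSTER (NAME(PROD.ACCT.MASTER)        -
             INDEXED KEYS(16 0) RECORDSIZE(250 250) -
             CYLINDERS(100 10)                      -
             STORAGECLASS(RLSSC) LOG(UNDO))
    /*

A batch job can then request RLS access on the DD statement, for example:

    //ACCTIN   DD  DISP=SHR,DSN=PROD.ACCT.MASTER,RLS=NRI

RLS=NRI requests no read integrity (the dirty read case) and RLS=CR requests consistent read; the same choice can be made in the program’s ACB instead of the JCL.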

[00:52:14] – Slide 19: IBM Data Virtualization Manager Implementation Steps

IBM Data Virtualization Manager will be critical to the success of the project. It will be used for accessing all the data, bringing it together, joining it, and making sense out of it, as well as providing a metadata catalog for it. DVM can access data in a couple of different ways. You can go directly against the underlying data sets, the IMS OSAM and VSAM data sets, the Db2 data sets, or native VSAM data sets, and access the data directly that way. Of course, that’s going to be a “dirty read” because it completely ignores locking. IMS Direct has to be enabled through a parameter in a configuration member for IMS Direct access. I’m recommending that you don’t use that, and that you instead use DBCTL or ODBA to provide a callable interface for IMS, so that the data is accessed the normal way we think about for accessing IMS and Db2 data. That way it will use the locking facilities provided in the whole data sharing Sysplex environment. DVM runs under z/OS and therefore can run in the Enterprise Information Server LPAR to access all the data. There’s more information below; there’s even an install demo shown there.
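Purely as an illustration of that parameter, and from memory rather than something to copy, the IMS Direct switch lives in the DVM server’s initialization member (an AVZSIN00-style member in the server’s SAVZEXEC library in recent releases); please verify the member and parameter names against the DVM documentation for your release. The statement looks roughly like this:

    "MODIFY PARM NAME(IMSDIRECTENABLED) VALUE(NO)"

Leaving it set to NO, and configuring the ODBA or DBCTL connection instead, keeps IMS access on the normal locking path recommended above.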

[00:53:36] – Slide 20: IBM Data Virtualization Manager Benefits for AI/ML

Let’s talk about some of the benefits of using DVM for AI/ML. DVM provides access to relational and non-relational IBM Z data through modern APIs, including HTTP, SOAP, SQL, NoSQL, and REST when combined with z/OS Connect. Secondly, and this is really important for the type of work we’re going to be doing with AI, it provides direct, real-time read and write access to relational and traditional non-relational Z sources. This gives you the ability to save the answer set used by an AI routine back to Db2 for AI accountability. You’ll need to be able to do this, to show where the data came from that was used by that routine, in that instance, if you’re ever challenged in court. Mainframe databases include, but aren’t limited to, Db2, IMS, VSAM, Oracle, any other type of relational database, or flat files. As far as security, DVM exploits IBM Z security features such as pervasive encryption and other security protocols, including RACF, ACF2, and Top Secret, and provides native database-level security within Db2. DVM is very efficient at what it does and very scalable, and it runs mostly on zIIP processors. And it does this while combining data from many structured and unstructured data sources. In the picture, I show DVM providing a joined answer set from data obtained from IMS, Db2, VSAM, Snowflake, Azure, and Mongo, and it provides it as one answer set, as a data stream, for AI and machine learning. Then a header record can be put on that joined answer set, and it can be written back, saved to a Db2 database, to record the sourcing and use of the data.
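To make that joined answer set a little more concrete, here is a hedged sketch of the kind of SQL a client might submit through DVM’s JDBC or ODBC driver. The virtual table names are purely illustrative; in practice they are whatever names you map over the IMS segments, Db2 tables, and VSAM records.

    SELECT  c.CUST_ID, c.CUST_NAME,      -- virtual table mapped over an IMS segment
            a.ACCT_ID, a.BALANCE,        -- Db2 table
            v.LAST_TXN_DATE              -- virtual table mapped over a VSAM KSDS
    FROM    IMS_CUSTOMER  c
    JOIN    DB2_ACCOUNT   a ON a.CUST_ID = c.CUST_ID
    JOIN    VSAM_ACTIVITY v ON v.ACCT_ID = a.ACCT_ID
    WHERE   a.BALANCE > 100000

The result comes back as one relational answer set that the AI or machine learning routine can consume, and that same answer set, with a header describing its sources, is what would be saved back to Db2 for accountability.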

[00:56:00] – Slide 21: First zEIS Project: POC Steps

Let’s talk about some first POC steps for the project. The first thing you want to do is understand your company’s pain points and get an executive sponsor from your CIO or business units. Try to understand where they’re spending a lot of time and money managing data and moving data around, and really not getting the results they need.

Next, present the zEIS LPAR concept and socialize it. Try to gain a commitment to proceed; Edge can provide a free workshop on that, or you could play back this presentation for folks. Finally, identify a business-unit-sponsored proof of concept, document the criteria for success, and gain a commitment to move forward. You could start by using one of your test system LPARs for a POC with Data Virtualization Manager or a query tool.

You could allocate a small z/OS zEIS LPAR with maybe one CP, a couple of zIIPs, and a terabyte of memory just to start with. Then below you see about a dozen zEIS LPAR proof of concept projects that you might try. One of the first ones you might look at is building the entire z/OS EIS LPAR and Linux on Z environment on a Z Development and Test (zD&T) system using the parallel sysplex option. zD&T runs under Red Hat Linux on x86, so you could build the whole thing on something like a ThinkPad just to do the POC to start with. Again, Edge can provide project management, architecture help, z/OS Sysplex programming, and database services for driving success for the project.

On the bottom right you see more information about zD&T and zD&T demos are also listed there.

[00:58:10] – Slide 22: Summary

So, in summary, AI could change just about everything, and all industries must move quickly and embrace AI to move into a new digital data business model, keeping in mind that AI routines are only as good as the data they have access to. It’s still true that about 90% of financial services customer data originates on System z, and over 90% of financial services core applications workload still runs on Z.

Now, over 90% of that workload is read-only. Many financial services clients run all their core applications on a single z/OS LPAR, which is what I call the mainframe Middle Ages, while having thousands of distributed servers with many LPARs ETL’ing data daily from the mainframe. The mainframe’s strength is in data management and its “shared everything” architecture: it shares processors, the I/O subsystem, huge memory, and connectivity. The new digital age data distribution workload, along with AI, will be as big as or bigger than the current batch and transactional application work the core operational systems do today, and those systems must be enabled to serve the digital workload while being isolated and protected.

Most companies may have no choice but to build a z Enterprise Information Server LPAR if they run the mainframe today or build it on some other platform if they don’t run a mainframe. They must be able to protect and recreate all the data used to produce AI results. And finally, building the z Information Server LPAR may be the key to survival for many financial services and other industry clients. The good news is that all of the technologies and platforms that we’ve talked about so far today are all available and in use in many industries. And you may have the in-house resources in terms of people and processes and machines to do the project yourself. But again, if you don’t, Edge Consulting can assist with all aspects of building the zEIS LPAR. We’d start with a short scoping engagement for the project to determine just exactly what you want to do. On the bottom right, I’ve listed three of my favorite AI YouTube presentations for you to take a look at.

[01:00:16] – Slide 23: References

Well, there’s certainly a lot to know and learn, and on this page I’ll list some of the references that I used to put this presentation together for your benefit.

[01:00:28] – Slide 24 and 25: Trademarks and Thank You for Attending!

And the lawyers require us to show this trademark page. Finally, I want to thank everyone for attending today, and please don’t be shy about emailing me with any follow-up questions or for help with your project. I really believe this project for building the Enterprise Information Server LPAR is crucial for your business and will really determine whether a lot of businesses are successful or not.

Upcoming Virtual IMS Meetings

April 9, 2024

Virtual IMS User Group Meeting

Steve Nathan
IBM

June 11, 2024

Virtual IMS User Group Meeting

August 13, 2024

Virtual IMS User Group Meeting