Virtual IMS User Group Sponsors

BMC Software

Virtual IMS User Group | December 2023

IMS Connect Reimagined: Leveraging SQL to Access IMS Data in Today’s Digital Ecosystem

Santosh Dorge
Lead Product Developer
BMC Software

Read the Transcription

[00:01:28] – Amanda Hendley, Planet Mainframe (Host)
Great. And everyone, I'll just check and make sure that you are muted for our presentation. We'll have Q&A after our presentation, so you can unmute at that time. And then I'm also going to ask for your feedback toward the end today. So that might be a good opportunity for you to unmute if you are in a setting where you can. But first, let me introduce myself. My name is Amanda Hendley. I am managing editor at Planet Mainframe, and I have the great privilege to host the virtual user groups for IMS, Db2, and CICS. And so today we're here for the IMS virtual user group. So welcome. I hope you can all see my screen. If you can't, or you're having some other issues, you can drop a note in chat and we'll see that. But welcome again. So, for today, I have a brief agenda to share with you. We're going to get into the presentation very shortly, so plenty of time to hear from Santosh. But today, just some brief introductory remarks, then the presentation.

 

[00:02:57] – Amanda Hendley, Planet Mainframe (Host)
Q&A, as always, we'll talk about news and articles and what's coming soon. But before we jump in, I want to thank our partners, BMC and Planet Mainframe, for their support of the user groups. They are what keeps us going. So, thank you. And then I wanted to let you know, if you hadn't noticed, although at this point I hope you've all gotten notifications, about the Arcati Mainframe Yearbook. This is the de facto manual about what's going on in mainframe that is released every January. So we are looking for a couple of things today from you. There is an annual mainframe user survey, and we have just a few more days to get responses in for it. So if you would, go to this QR code or the iTech-Ed Arcati page and complete the mainframe user survey. It's intense. It's about 15 minutes when we've timed it. But the benefit on the other end, besides getting your own copy of all of the survey results, is that we are also going to do a drawing for some Bose QuietComfort headphones, those really nice plush headphones. We're giving away three pairs of them.

 

[00:04:24] – Amanda Hendley, Planet Mainframe (Host)
So please fill out the annual user survey so you have your chance at winning a pair of those headphones, and so you share with us and with the community what's going on in your shop in regards to mainframe. There are a lot of great questions about your planning, what you're planning to do, how you're spending money, and what the talent side is looking like for you. So we really want your feedback there. If you're interested at all, there is free directory inclusion. So the Arcati Mainframe Yearbook has a directory in it of all of the different vendors and mainframe resource providers. So that is something you're going to want to make sure that your company is listed in. And then there are also sponsorship opportunities, like these companies listed here that are sponsoring the Arcati Mainframe Yearbook. So I hope I've talked about that long enough for you all to pull out your phone and scan the QR code, or go to tinyurl.com/arcati, and plan on filling out that survey today. Maybe we'll get done with a little bit of extra time that you've blocked off for this meeting, and you can do it now. All right, so now that I've done my plug for the Arcati Mainframe Yearbook, I want to welcome our speaker.

 

[00:05:46] – Amanda Hendley, Planet Mainframe (Host)
So, Santosh Dorge is a lead product developer for BMC Software, and he's talking today about IMS Connect reimagined, leveraging SQL to access IMS data in today's digital... I think I'm missing something. I apologize. But let me introduce Santosh. He is a mainframe product developer, and he has a passion for harnessing the power of legacy systems to drive modern performance and modern technological advances. He's a well-known speaker in the space. You've probably seen him present before, but he's bringing to us almost 20 years' experience in IMS and mainframe. So this should be a really great session. With that, Santosh, I'm going to stop sharing and let you take over.

 

[00:06:34] – Santosh Dorge, BMC (Guest Speaker)
Thank you, Amanda. Hope you guys can see the screen.

 

[00:06:48] – Amanda Hendley, Planet Mainframe (Host)
That looks great. Thank you.

 

[00:06:50] – Santosh Dorge, BMC (Guest Speaker)
Thank you. Today's topic: IMS Connect Reimagined, leveraging SQL to access IMS data in today's digital ecosystem. I'll be speaking about modernizing IMS Connect to support newer technologies and architectures, architectures such as cloud computing, RESTful APIs, and microservices. Then we can dive into setting up environments optimized for Java and Python programming techniques, ensuring smooth interactions with the IMS databases. Then I'll speak about events and traces that can be captured to diagnose problems, empowering IMS Connect environments with robust tools for efficient troubleshooting and performance tuning. So before we begin, about myself: my name is Santosh Dorge. I am a lead product developer at BMC Software, located in Pune, India. It's 10:00 p.m. here at the moment, so I'm not sure whether I should be saying good morning, good afternoon, or good evening; people are joining from global locations here in the virtual meeting today. I work on BMC AMI Data for IMS products. So that's about me.

 

[00:09:12] – Santosh Dorge, BMC (Guest Speaker)
Today's topics: we're going to speak about the IMS Connect overview. For those who are new to IMS Connect, I'm going to speak a little bit about the IMS Connect overview and the address space. Then I'm going to speak about access to IMS transactions from web technologies over TCP/IP, then the message formats for the input request messages, and then message flow and problem diagnosis in the transaction access area. Then we'll be touching on database access: access to IMS data using SQL over the Distributed Data Management (DDM) protocol. Then we're going to discuss input request message routing for the ODBM messages. Then I'll be speaking about events and traces, and common challenges throughout the development, testing, and production phases with the IMS Connect and the OTMA and ODBM interfaces. Then lastly we can discuss evolving IMS Connect environments to keep innovating in the IMS Connect area. We can speak about data scraping for analytics and deriving insights for artificial intelligence. So that's the list of topics coming up in the current presentation here.

 

[00:10:45] – Santosh Dorge, BMC (Guest Speaker)
So IMS Connect is an address space, a separate address space in IMS. It's a component of the IBM IMS family of products. IMS Connect serves as a gateway that allows applications to interact with IMS databases and transactions using standard communication protocols like TCP/IP and HTTP. This enables seamless integration of IMS-based applications with modern technologies and provides access to IMS transactions and data from a wide range of platforms and devices. In the picture you can see here, the applications or the interfaces, like DataPower, the Universal DB resource adapter and drivers, IMS Connect API clients, the SOAP Gateway, or an IMS TM resource adapter, can communicate with the IMS Connect address space here over TCP/IP. Then we have the BMC AMI Energizer for IMS Connect tightly coupled into IMS Connect; it provides the facility to capture traces and empowers the whole IMS Connect environment and address space. IMS Connect communicates with IMS OTMA over XCF to get the information or execute the IMS transactions without modifying the mainframe side of the system or the environment. Key benefits with the BMC AMI Energizer for IMS Connect are better availability through dynamic changes, without needing an outage. Even if there are changes needed to the environment, like changing the routing or adding a new data store to the IMS Connect environment, it can be done dynamically. And improved productivity by eliminating the need for coding the assembler message user exits: you can use the virtual exits. You don't need to maintain the IMS Connect user exits and code the assembler exits; that can be eliminated with the standard features that are provided with the virtual exit. And you can use the traces and the journal data sets for diagnosing problems. You can use the graphical user interface to see the analytics out of events occurring in the IMS Connect address space.

 

[00:14:08] – Santosh Dorge, BMC (Guest Speaker)
So that's about the IMS Connect address space. Then, if we're speaking about transaction access over TCP/IP, IMS Connect facilitates transaction access through OTMA. One need not modify the existing resources on the mainframe side. They can just start building the applications on the open system side, start communicating with IMS Connect, and use the existing transactions; that information can be used in the open system side applications. It allows the application to interact with IMS and retrieve the data from IMS databases. This capability is crucial for enabling distributed applications running on various platforms to efficiently access the data stored in the IMS database without modifying the programs written for transaction access from different terminals. So that's the transaction access.

 

[00:15:30] – Santosh Dorge, BMC (Guest Speaker)
Let me show you this here. It supports the growth of evolving technologies, as I said, without modifying applications on the mainframe, using Open Transaction Manager Access. OTMA uses the cross-system coupling facility (XCF) to send and receive the messages. Then you can have one IMS Connect connect to multiple IMS control regions in multiple XCF groups, and one IMS control region can connect to multiple IMS Connects. So one-to-one, one-to-many, or many-to-one connections can happen between the IMS control region and the IMS Connect address space. IMS Connect and IMS can be on different LPARs in the same sysplex.

 

[00:16:39] – Santosh Dorge, BMC (Guest Speaker)
So this picture shows a sample configuration or environment to enable IMS transaction access. On the z/OS side, one can have a z/OS Connect address space established in order to create REST APIs. Java- and Python-based applications or cloud applications can access those APIs; in turn, the request will be sent to IMS Connect, and IMS Connect will get the information to OTMA. It can run the transaction in the MPR, access the data from the IMS database, and return the response to the z/OS Connect API application. Alternatively, one can create applications using the IMS TM resource adapter, or port-level socket connections connecting to the IMS Connect port, and on the same path to the transaction manager, executing the transaction in IMS and returning the response to the client created outside the mainframe system. IMS Connect provides exit routines, a mechanism to modify the behavior of IMS Connect. You can evaluate the input message coming from the application to IMS Connect, code the behavior of how IMS Connect will work on that particular input message, and send it to OTMA. At the same time, you can change the output behavior of IMS Connect for the output message to the client. So the input message is of a format like this: there would be the fullword total length, then the 28-byte fixed-length IRM, then LLZZ data, ending with the string X'00040000'. It can use something called the extended IRM, which is of this format, or alternatively it can use pre-built OTMA headers like those shown here. OTMA can understand the information only in terms of OTMA headers. So applications developed as Java APIs or as TCP/IP port connections send the information in the format shown here, and ultimately IMS Connect converts that information into the OTMA header, which can be processed by OTMA in order to execute the transaction in the transaction manager and MPR.

 

[00:20:14] – Santosh Dorge, BMC (Guest Speaker)
Okay, this is the configuration for transaction access. One can define the proc for IMS Connect, which is the program being executed in the IMS Connect procedure, and there is a parameter, HWSCFG, that points to the PROCLIB member where one has to code the TCPIP statement and the DATASTORE statement that enable transaction access to the IMS data store. So these are the parameters; some of these values shall match the ones in the DFSPB member of the PROCLIB. The DATASTORE ID= is the IMS data store ID, GROUP= is the XCF group and shall match the value in the DFSPB member, APPL= shall match the APPL ID defined in the DFSPB, MEMBER= is the XCF member name of the IMS Connect address space, and TMEMBER= is the XCF member name of the IMS here. Then you can define multiple data store IDs here. These are the data store IDs that are defined so that applications can send information to these data store IDs here.
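
For illustration only, a minimal HWSCFG member along these lines might look like the following; the names, port, and exit shown are placeholders, not the configuration from the slides:

    HWS       (ID=HWS1,RACF=N,XIBAREA=20)
    TCPIP     (HOSTNAME=TCPIP,PORTID=(9999),MAXSOC=50,TIMEOUT=8800,
              EXIT=(HWSSMPL1))
    DATASTORE (ID=IMS1,GROUP=XCFGRP1,MEMBER=HWS1XCF,TMEMBER=IMS1,
              DRU=HWSYDRU0)

Here GROUP= would match the OTMA XCF group named in the DFSPB member, and TMEMBER= the IMS OTMA member name, as described above.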

 

[00:22:35] – Santosh Dorge, BMC (Guest Speaker)
All right, so this picture here shows sample code for transaction access using a Java program. The Java program can be part of an application outside the mainframe; it can be anything: a program in the cloud, a program that provides microservices, or a program in a cloud application. So as I showed on the previous slides, one way is to create the IRM in the LLZZ data format and send it to the IMS Connect port. It can connect to the host and port number, and it will send the information to the IMS Connect socket, which will be continuously reading the information received on that socket and will send it to the exits, and from there it will go to OTMA. Again, it will follow the same path wherein from OTMA it will be scheduled in the message processing region and the response will get back to the client. IMS ships a Java library, imsudb.jar, and one can use this library to create the application. Or one can use the resource adapter or socket-level connections and have their own format of the IRM to be analyzed by the exits. So it's completely customizable, wherein the standard fixed format of the IRM can be followed by LLZZ data in specific or customized formats, and IMS Connect will still handle it.
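
To make that framing concrete, here is a minimal Java sketch of a socket client that sends an LLLL + IRM + LLZZ data + X'00040000' request, in the spirit of the slide's sample. The host, port, transaction code, client ID, and the exact IRM field offsets are assumptions for illustration; a real client must match the HWSIMSCB macro layout and the message exit in use.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;
    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;

    public class ImsConnectTxnSketch {
        // EBCDIC codec; available in standard JDK distributions
        static final Charset EBCDIC = Charset.forName("Cp1047");

        public static void main(String[] args) throws Exception {
            byte[] irm = buildIrm();                        // 28-byte fixed IRM (assumed layout)
            byte[] data = "MYTRAN  HELLO".getBytes(EBCDIC); // tran code + input data (placeholder)

            // LLZZ segment: 2-byte length (includes LLZZ itself), 2-byte ZZ
            ByteBuffer seg = ByteBuffer.allocate(4 + data.length);
            seg.putShort((short) (4 + data.length)).putShort((short) 0).put(data);

            byte[] eom = {0x00, 0x04, 0x00, 0x00};          // end-of-message X'00040000'
            int total = 4 + irm.length + seg.capacity() + eom.length;

            try (Socket s = new Socket("mvs.example.com", 9999)) { // placeholder host/port
                DataOutputStream out = new DataOutputStream(s.getOutputStream());
                out.writeInt(total);                        // LLLL: fullword total length
                out.write(irm);
                out.write(seg.array());
                out.write(eom);
                out.flush();

                DataInputStream in = new DataInputStream(s.getInputStream());
                int respLen = in.readInt();                 // response also begins with LLLL
                byte[] resp = new byte[respLen - 4];
                in.readFully(resp);
                System.out.println(new String(resp, EBCDIC));
            }
        }

        // Assumed 28-byte fixed IRM; verify every offset against HWSIMSCB before use.
        static byte[] buildIrm() {
            ByteBuffer b = ByteBuffer.allocate(28);
            b.putShort((short) 28);                 // IRM length
            b.put((byte) 0);                        // architecture level
            b.put((byte) 0);                        // flag byte
            b.put("*SAMPL1*".getBytes(EBCDIC));     // exit ID: drive the HWSSMPL1 sample exit
            b.putShort((short) 0);                  // reserved / NAK reason
            b.putShort((short) 0);                  // reserved
            b.put((byte) 0);                        // flag byte
            b.put((byte) 0);                        // timer value
            b.put((byte) 0);                        // socket type (transaction)
            b.put((byte) 0);                        // encoding scheme
            b.put("CLIENT01".getBytes(EBCDIC));     // client ID (placeholder)
            return b.array();
        }
    }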

 

[00:25:07] – Santosh Dorge, BMC (Guest Speaker)
This is another example, of executing the API using a Python program, wherein in this program I have a Python API call being made and the response being displayed. So I created this API using the z/OS Connect Explorer for the REST APIs, and then I pushed it onto the z/OS Connect address space, and from there the connection is established like I was showing on the previous slides.

 

[00:25:51] – Santosh Dorge, BMC (Guest Speaker)
Here, let me go back to the slide. So my Python-based application accesses the API from z/OS Connect, and in turn it sends the information to IMS Connect. The API converts the API information into the format required for IMS Connect, and then it will execute the transaction in the MPR without modifying the transaction, and it can display only the selected information, exposing only the required information that is defined in the API.
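
For comparison with the Python client on the slide, a minimal Java sketch of the same idea, calling a z/OS Connect REST API over HTTP, might look like this; the host, port, and resource path are placeholders, not the presenter's actual API:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ZosConnectApiSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://zosconn.example.com:9443/phonebook/contacts/LAST1"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();
            // z/OS Connect maps this REST call to the IMS transaction; only the
            // fields exposed in the API definition come back in the JSON response.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }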

 

[00:26:39] – Santosh Dorge, BMC (Guest Speaker)
So going back to this slide, this is a sample API. I used existing tools to create the API, and the information is displayed here.

 

[00:27:00] – Santosh Dorge, BMC (Guest Speaker)
Okay, this is a snap from the BMC AMI Log Analyzer summary report, with SUMMARY=ALL and LUOW DETAIL=ALL. This report was executed using the SLDS and the BMC AMI Energizer for IMS Connect journal. So what I did: I executed one transaction from the TCP/IP interface, then I got the SLDS for that time, and I got the journal for the same time, and executed the BMC AMI Log Analyzer report to see the flow, how the transaction was executed. So the first few records were in IMS Connect. So IMS Connect, prepare, read: it was continuously reading on the socket. As soon as the message was received on this particular port, the IBM programs prepared it for the read, and then the information was read from the socket. So these two events here, event 60 and 73, indicate the information was received from the client. Then user exit entry: I used the HWSSMPL1 sample exit here. Then user message exit exit: the message entered the exit and the message is going out from the exit, and what was the return code at that time, what was the reason code at that time. And then it says the message was sent to IMS, and again from which IMS Connect it was received: the TCP/IP address of the IMS Connect address space and the client port ID from where the message was received. And then it went into the IMS data store. As you guys know, the message is queued up, and you see what information was on the message at the same time: message in queue, then program is scheduled, and then what is the recovery record. These are all log records up to here. And then again, messages coming from IMS to IMS Connect: what we sent to IMS, what we are receiving from IMS; again, what the exit received and what was sent from the exit; and then program ended. And at the end, IMS Connect ends the socket connection, closes the socket connection. So this is the complete end-to-end flow of how the message arrived in the IMS system from the TCP/IP world, and at each point, what changes the system had made to the particular message. This kind of information helps in diagnosing the problem: whether it is erroring in the exit, whether it is erroring in the IMS data store, or whether it is the response erroring out. So it is helpful in the production environment as well as in the test environment when applications are being tested. Okay, I'm moving to the next slide.

 

[00:31:03] – Santosh Dorge, BMC (Guest Speaker)
So that was all about IMS transaction access through IMS Connect from TCP/IP applications: how to diagnose problems, the message flow, and the format of the messages.

 

[00:31:28] – Santosh Dorge, BMC (Guest Speaker)
Now over to open data access to the IMS database. So BMC AMI Energizer for IMS Connect enables you to route workload to a specific ODBM address space by providing virtual aliases and resource grouping. And it enables tracing and journaling at certain points so that diagnosis, or problem determination, becomes easy. Let me jump onto the next slide here.

 

[00:32:13] – Santosh Dorge, BMC (Guest Speaker)
Access to online IMS databases from anywhere in the enterprise: open systems application developers can use relational interfaces without changing IMS applications and databases. It uses the DDM protocol, or DDM code points. And if I'm to speak about ODBM: ODBM receives the database connection request from IMS Connect as DDM commands, then translates the incoming database request from the DDM protocol into the DL/I calls expected by IMS. IMS does not understand the DDM protocol, so it has to be converted to DL/I, and that's what ODBM does for IMS. And then the IMS database manager gets the information to ODBM, and ODBM returns it to IMS Connect and the client application.

 

[00:33:19] – Santosh Dorge, BMC (Guest Speaker)
Okay, that's what this line says: it translates the response to the client into the DDM protocol. This picture shows a sample environment, what the control blocks in the Distributed Relational Database Architecture (DRDA) look like for data access by outside applications without executing a transaction. Let's say data is stored in the IMS database, and an application outside the mainframe needs access to the database, to certain segments or certain fields in the database, and there is no transaction available in the current IMS transaction manager system. There is no need to write a transaction. Instead, applications in the distributed environment, like Java or Python programs, can code SQL queries to access the information stored in the IMS database. So again, it is a type 4 JDBC kind of connection that can be established between the application and the IMS Connect port; we call it the DRDA port. The Universal DB resource adapter or the Universal JDBC drivers can be used in order to establish a connection here. Energizer can help you route these messages to a designated, well-equipped IMS system through ODBM. It can route the incoming messages to the data store that is well equipped, that has the resources defined to serve the request, or it can load balance as well between the multiple IMS data stores here. So when the message comes from the open system application, it is in the format of DDM commands, and it goes as DDM commands to ODBM, and ODBM converts it to DL/I calls. Then these DL/I calls are served by the database manager to access the information stored in the online database and return that information back to the client.

 

[00:36:16] – Santosh Dorge, BMC (Guest Speaker)
Here is more information about the message format. DDM commands, reply messages, and chained objects begin with a six-byte DSS header, followed by the two-byte LL (this is the six-byte part and this is the two bytes), and then two more bytes called the code point. A code point identifies the function requested through the DDM command; every function, like query data or reply data, has a different code point defined. These are standard DDM protocol code points that the application has to send to IMS Connect, and then ODBM can convert them to DL/I calls. Okay.
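
As a small illustration of that framing, the sketch below parses a fabricated DSS carrying an EXCSAT command. The 6-byte header layout and the sample code point values follow the DRDA specification as described above; the byte buffer itself is made up for the demo.

    import java.nio.ByteBuffer;

    public class DssHeaderSketch {
        public static void main(String[] args) {
            // Fabricated DSS: 2-byte length, X'D0' magic, 1-byte format,
            // 2-byte request correlator, then LL and the code point.
            byte[] raw = {0x00, 0x0A, (byte) 0xD0, 0x01, 0x00, 0x01,  // DSS header
                          0x00, 0x04, 0x10, 0x41};                    // LL + code point
            ByteBuffer b = ByteBuffer.wrap(raw);
            int dssLen = b.getShort() & 0xFFFF;   // total DSS length
            int magic  = b.get() & 0xFF;          // must be X'D0'
            int format = b.get() & 0xFF;          // DSS type/chaining flags
            int correl = b.getShort() & 0xFFFF;   // request correlator
            int ll     = b.getShort() & 0xFFFF;   // length of the command object
            int codePt = b.getShort() & 0xFFFF;   // e.g. X'1041' EXCSAT, X'106D' ACCSEC
            System.out.printf("len=%d magic=%02X fmt=%02X correl=%d ll=%d codepoint=%04X%n",
                    dssLen, magic, format, correl, ll, codePt);
        }
    }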

 

[00:37:29] – Santosh Dorge, BMC (Guest Speaker)
Jumping on to the next: this is an example of how to configure data access in the PROCLIB member of the IMS Connect HWSCFG configuration. The last three lines here indicate the data access. One has to define the DRDA port number, then the IMSplex, and then the ODBM connection, with different parameters here, like the timeout, or how long the connection will stay alive when there are no messages; those kinds of parameters need to be defined. The main one here is ODACCESS, then DRDAPORT=, and then which IMSplex the ODBM belongs to. Another thing required here is a new address space alongside the IMS Connect address space: the ODBM address space is a separate address space that needs to be created, and then there is a need for the SCI, OM, and RM address spaces, which is the CSL IMSplex, and then one needs to define the IMS catalog as well in order to enable SQL access to the IMS database.
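
As a sketch, the ODACCESS portion of the HWSCFG member might look like the lines below; the port number and IMSplex member names are placeholders, not the values from the slide:

    ODACCESS (DRDAPORT=(ID=8888),ODBMAUTOCONN=Y,
              IMSPLEX=(MEMBER=HWS1,TMEMBER=PLEX1))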

 

[00:39:03] – Santosh Dorge, BMC (Guest Speaker)
Okay, here is an example of code, again using that imsudb.jar file which is shipped with IMS: data access using a JDBC type 4 connection. java.sql is a package in the Java Standard Edition library that provides the interface for database access using JDBC. JDBC is a Java-based API that allows Java applications to interact with relational databases. And imsudb.jar is the interface that interprets the SQL into DL/I requests; those DL/I requests, or DDM commands, are sent to IMS Connect, and then IMS Connect sends them to ODBM, which converts them to DL/I functions in order to retrieve the information. This is a sample here: establishing the connection and then creating the SQL for accessing the data. This one is accessing the metadata from the catalog and then printing it. It's a simple one, getting the information from the IMS catalog.
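
Along the lines of the slide's sample, a minimal, self-contained version of that JDBC flow might look like the following sketch, with imsudb.jar assumed on the classpath; the host, DRDA port, PSB name, and the PCB/segment/field names are placeholders for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ImsJdbcSketch {
        public static void main(String[] args) throws Exception {
            // Register the IMS Universal JDBC driver (imsudb.jar)
            Class.forName("com.ibm.ims.jdbc.IMSDriver");
            // jdbc:ims://<host>:<DRDA port>/<PSB name>; the driver sends DDM
            // commands to IMS Connect, and ODBM turns them into DL/I calls.
            String url = "jdbc:ims://mvs.example.com:8888/MYPSB";
            try (Connection conn = DriverManager.getConnection(url, "USERID", "PASSWORD");
                 Statement stmt = conn.createStatement();
                 // Tables are addressed as <PCB name>.<segment name>
                 ResultSet rs = stmt.executeQuery(
                         "SELECT FIELD1, FIELD2 FROM PCB01.SEGMENT1")) {
                while (rs.next()) {
                    System.out.println(rs.getString("FIELD1") + " " + rs.getString("FIELD2"));
                }
            }
        }
    }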

 

[00:40:36] – Santosh Dorge, BMC (Guest Speaker)
Another way is similar access using Python. I'm using the JayDeBeApi package here in Python to establish the connection, with the same driver again. This too is accessing the metadata from the catalog database.

 

[00:41:18] – Santosh Dorge, BMC (Guest Speaker)
We'll jump onto the next slide here: the z/OS batch launcher. Alternatively, you can have batch programs running on the mainframe to execute SQL queries on the IMS database and get the information. You can have complex queries written here, like joins, and then you can create reports using the SQL queries themselves from the IMS database and send those reports to the business units. JVMLDM80 is the name of the 31-bit Java 8.0 batch launcher. If you want to use the 64-bit Java 8.0 batch launcher, you need to specify JVMLDM86 right here. And obviously you need to ensure that the Java home path and the library path settings correspond to the installed Java SDK.
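
A sketch of the JCL for such a batch job is shown below, running a class like the JDBC sketch above. The load library, paths, and class name are placeholders; the STDENV exports are where the Java home, library path, and classpath (including imsudb.jar) would be set to match the installed SDK.

    //JAVASQL  EXEC PGM=JVMLDM86,REGION=0M,
    //   PARM='/ ImsJdbcSketch'
    //STEPLIB  DD DISP=SHR,DSN=JZOS.LOADLIB
    //STDENV   DD *
    export JAVA_HOME=/usr/lpp/java/J8.0_64
    export PATH=$JAVA_HOME/bin:$PATH
    export LIBPATH=$JAVA_HOME/lib/s390x:$JAVA_HOME/lib/s390x/j9vm
    export CLASSPATH=/u/user/app:/usr/lpp/ims/imsjava/imsudb.jar
    /*
    //SYSOUT   DD SYSOUT=*
    //SYSPRINT DD SYSOUT=*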

 

[00:42:30] – Santosh Dorge, BMC (Guest Speaker)
So these path and home variables are set so that this program can run and generate the reports, or fetch the data from the IMS database using SQL. Again, you'll need the DRDA port defined, and then the data store name; it goes through IMS Connect. Okay, next: let's say I have the environment established to access the data over SQL. Now, because there are multiple blocks involved, multiple address spaces involved, how would I know where the program has errored, and how would I capture the errors? This is the BMC AMI Energizer for IMS Connect ODBM report. I want to highlight the routing exit here. The Energizer has the capability to route the input messages to the well-equipped data store. So here you can see AD9 is the input alias. There is something called aliases; they need not be the real or logical alias names defined to IMS Connect. This is a pseudo alias, and the exit routes it to ED5, which is a real alias. So you can create hundreds of pseudo aliases, and the application on the open system side need not know the real alias name or the real data store defined.

 

[00:44:39] – Santosh Dorge, BMC (Guest Speaker)
You need not let outsiders know the actual data store name or alias name, and you can route the messages to different aliases. This helps in load balancing, this helps in reducing cost, and this helps in better security for the messages. And you can get more information about what was received at the time of routing: from where the message was received, the client IP address (I have masked it here, but you'll see the IP address of the client machine from where the messages are being received), then the client port and client ID, and then where it has been routed to.

 

[00:45:35] – Santosh Dorge, BMC (Guest Speaker)
Onto the next page. I have different kinds of events highlighted here. You can see the event activity description here: the message was read from the socket, and then the DRDA command exchange server attributes (EXCSAT) was received. Then you can see here, after another socket read, the DRDA command access security (ACCSEC) has been received, the information related to the user ID and password has been received, and then the response of exchange server attributes, and then the response of access security, that's here, the SECMEC, and the DRDA reply. So you can trace each and every interface: what was received, what was sent to the next interface, and so on. The response also: what was received from IMS, what was received from ODBM into IMS Connect, and on to the client. These control blocks here, the DRDA reply, the query data description, and the query data: we do not display the real data because it can contain secured application information, and as part of the Energizer we take care of not displaying the sensitive information here. And then you can see the trigger event completed. So from here to here, for one SQL executed, you see all the events that happened, which PSB was allocated, which database was accessed through that PSB. Everything that one needs to know is captured as part of these events and reports, and that helps in diagnosing problems and understanding the message flow. I use this as a good training tool for the newbies on the team, to explain to them the flow of the messages. I generally execute one transaction or one SQL query, then get the journal, get the SLDS switched, and then run these reports to see how the messages were flowing through the system, at what point each message processing step was done, and what information was changed. This builds, or improves, my knowledge of the system.

 

[00:48:59] – Santosh Dorge, BMC (Guest Speaker)
Okay, I'm going to speak about the BMC AMI Energizer for IMS Connect traces. We have BMC AMI Energizer for IMS Connect traces in the BMC AMI Command Center for IMS. You can see here what message was received from the client. This is the graphical user interface; it connects to IMS Connect, or rather to the Energizer, through BMC-owned address spaces like the CPC and UIM, and these address spaces feed the information from the Energizer to BMC AMI Command Center for analytics and for visualizing the events occurring in IMS Connect, real-time events. So if I speak specifically about the traces, it displays the message from the client and then a readable format of it: what the message ID was and what protocols were used, and there is a lot more information about what was received. Then the message from IMS: so what was received from the client, what was received in the response from IMS, and what was sent to the client after changing the message in the user exit. And if one needs to look into the dump records, they can click on the record tab and see what the result in the exit was, whether it erred in the exit, what the response code was, what the return code was. One can click on this tab and see the information. And if one needs to know the routing information, what input data store was requested and what target data store served the input message, that can be seen here. So this is a good graphical view. Those who like to look into the details of each and every trace record can go to the trace records, and in the statistics tabs here it displays a visualization of the events occurring in IMS Connect.

 

[00:51:55] – Santosh Dorge, BMC (Guest Speaker)
Okay, so we categorize traces mainly into transaction traces and event traces. There are many more types of traces, like command traces and journal traces, but mainly, when you execute a transaction, all its information is in the transaction trace. What you can see here is the inbound message, then what commit mode was used, then who executed it (the client ID and user ID), which data store was used, and then the response time of that particular transaction. This kind of information can be seen here, clubbing together the events. And if one needs to see the details, they can select a trace and see in detail what was received from the client, what was sent to IMS, how the processing was done, and what was modified by the user exit before IMS Connect sent it on to IMS OTMA, and so on. The same is true for the outbound response message from IMS to the client; and for the outbound message here, if I select it to display the detail, it shows the IMS Connect control blocks, the information that was stored in the IMS Connect control blocks, and that helps in diagnosing problems.

 

[00:53:51] – Santosh Dorge, BMC (Guest Speaker)
Here is the example of event traces. Again, end to end, what happened: begin accept socket, prepare socket, read socket; then the user message exit was entered and returned from; then the message was sent to OTMA, and then a response was received from OTMA, and so on. Again, the response entered the user message exit, and then close socket and event completed. So this is for the transaction access. And if you look here at the events occurring for the SQL access, you can see the prepare socket, then the DRDA command, then the message was sent to ODBM and received from ODBM as a reply, and then write socket and event trigger complete, up to here. So this is good information if, let's say, something errors here and the message was not sent to OTMA.

 

[00:55:08] – Santosh Dorge, BMC (Guest Speaker)
So this helps the developer of the application identify where exactly the error occurred, and target or correct that area of the program so that the transaction or the SQL access to the IMS DB is successful.

 

[00:55:30] – Santosh Dorge, BMC (Guest Speaker)
Okay, so I'm going to speak about the challenges in the different SDLC phases, and then I think I have a second-to-last slide about DBAs collaborating with developers.

 

[00:55:55] – Santosh Dorge, BMC (Guest Speaker)
This is what I was speaking about: the R&D staff resolving their own errors instead of getting in touch with the system administrators or DBAs, and DBAs working with DevOps automations in order to run their stuff seamlessly.

 

[00:56:17] – Santosh Dorge, BMC (Guest Speaker)
So getting back to the challenges in the SDLC phases: problem identification during the development and testing phases of the SDLC. Generally, what happens if a transaction has errored, or if a developer is creating an application, modifying an application, or enhancing an existing application? When they try to run the application and the connection does not go through successfully, they would need to know what is causing the connection to fail. It could be the application itself not getting the connection established with the IMS Connect socket or port, or it could be IMS Connect rejecting the connection, maybe because of a password error or an error happening in the exit. And if the developer doesn't get to the point of why the connection is not successful, then it's going to be time consuming, and it's going to cost the organization. Another challenge is: where in the IMS system did the input request message error? One would need to know where the error occurred so they can rectify it, and to locate the return codes and reason codes for the failed messages. These kinds of challenges can be resolved using BMC AMI Energizer for IMS Connect. Again, these are the challenges for the open systems developers and QA engineers. On the other hand, if we look at the system programmers or the system admins, they have challenges like: are all the input request messages routing to the appropriate IMS data store? If one system or one LPAR is overloaded, that adds to the cost, cost in terms of CPU. Is the workload balanced across all available IMS data stores? If one IMS data store is heavily loaded, then probably everybody is using that IMS system and the other IMS system is not being utilized to its optimum; if this is happening, that is what the system programmer or system admin needs to know. Is the test system available to the developers and QA with no downtime during the test cycle? When the time comes to deliver the developed software to production, the testing cycles are crucial for the QA teams, and the test system needs to be available to them all the time. Team one can request some configuration changes while, at the same time, team two is running their automated scripts. If the system goes down for the ad hoc changes, that impacts the other testing teams. So if system programmers and system admins can make the ad hoc changes without bringing down the IMS Connect or ODBM address spaces, that is beneficial, and that's what the major players in the IMS Connect and ODBM area look for. And again, the production system is the prime responsibility of the system programmers and system admins. When it comes to the IMS Connect or ODBM address spaces, the major question would be: is there a mechanism in place to trace any failures that might occur, so they can be fixed? If a failure occurs in the production system, the first thing is that everyone would like the failure not to occur, and if it occurs, everyone would like it not to repeat. So if it can be fixed permanently, there has to be a mechanism that helps in identifying the problems occurring in the IMS Connect or ODBM address spaces, and I would say that is events and traces. Then, changes to the production IMS Connect environment with no downtime is one more crucial thing. There could be new data stores added or changes happening to the IMS Connect environment, and system admins or system programmers shall be able to do it with no downtime, like adding a new virtual exit or defining a new exit to IMS Connect. The system programmers should be able to do it without bringing down the IMS Connect address space.

 

[01:02:18] – Santosh Dorge, BMC (Guest Speaker)
Okay, I'm jumping onto the next slide: evolving the IMS Connect environment.

 

[01:02:27] – Santosh Dorge, BMC (Guest Speaker)
I'll jump to the next one here: IMS Connect to artificial intelligence models. There is a lot of data that comes into IMS Connect, and there is real-time access to the IMS Connect data through the BMC AMI Energizer for IMS Connect events and traces. These events and traces contain lots of information about input messages and output messages, and this information can be used as the input for AI models. We have something called the data train from BMC AMI Datastream for IMS, which extracts real-time information from some of the events related to security in IMS Connect, and most of the information from the real-time logs; that information is then available as data and can be used in AI models as training data. Another thing we have here is offline access to JSON-formatted or CSV-formatted journal reports, which can be used as data sets in machine learning. The reports include things like transaction response time, connection history, and SQL activity through ODBM. So this information can be used as a data set in machine learning to derive insights. The transaction and event traces, like I showed on the previous slides in the BMC AMI Energizer for IMS Connect journal reports, can be used in descriptive, predictive, and prescriptive analytics. There are different phases of the analytics I'm speaking about here. One is descriptive: based on historical data, what happened, and why that particular condition occurred in the system. This is where we are at the moment: we are analyzing the data based on events occurring in IMS Connect, and then we display certain analytics; it can derive why certain events occurred. The next phase would be predictive: predict what's going to happen in the system. And the phase after that is prescriptive analytics: suggesting solutions for the problems that are predicted. Okay, another thing I wanted to point out here is the steps in data mining, like understanding the business data. Mostly, BMC doesn't have access to the business data; every business house has their own data. So this is for those people who want to create AI models on their data, be it descriptive, predictive, or prescriptive analytics. They will have to understand the business and the data, and prepare the data from existing reports on the mainframe. They can use the CSV or JSON format reports, or they can use the journal traces and events, the transaction and event traces here, to create the AI model.
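
As a tiny illustration of that "prepare the data" step, the sketch below summarizes transaction response times from a hypothetical CSV journal report; the file name and the column position of the response time are assumptions about the report layout, not the product's actual format.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.DoubleSummaryStatistics;
    import java.util.stream.Stream;

    public class JournalCsvSketch {
        public static void main(String[] args) throws Exception {
            DoubleSummaryStatistics stats;
            try (Stream<String> lines = Files.lines(Paths.get("journal_report.csv"))) {
                stats = lines.skip(1)                          // skip the header row
                        .map(line -> line.split(","))
                        .mapToDouble(cols -> Double.parseDouble(cols[3])) // assumed response-time column
                        .summaryStatistics();
            }
            // Descriptive analytics: what happened (counts, averages, extremes),
            // the starting point before predictive or prescriptive models.
            System.out.printf("n=%d avg=%.2f max=%.2f%n",
                    stats.getCount(), stats.getAverage(), stats.getMax());
        }
    }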

 

[01:06:58] – Santosh Dorge, BMC (Guest Speaker)
Okay, again, evolving IMS Connect. This is what I was speaking about, the statistics. We are in the descriptive phase of the analytics, and we provide descriptive charts to the customer: what's happening in their IMS Connect environment, what kind of message sizes, message rates, or what kind of events they are receiving. There are different kinds of reports here. This exit summary chart is one; users can see input message size or output message size, the number of messages errored, and so on. Lots of charts are provided in the statistics tab.

 

[01:07:58] – Santosh Dorge, BMC (Guest Speaker)
Okay, this is probably my last slide to explain. Security and data recovery were among the top priorities of extra-large organizations in the recent BMC survey. Then, IT executives have the highest value perception within organizations toward integrating and automating database changes in the CI/CD pipeline. Again, different persons have different goals. Shift left, DBA and developer collaboration: this is what I was speaking about. Empower mainframe or open systems developers with self-service capabilities and tools to include database changes as part of the DevOps process; they can themselves identify the problems occurring in different address spaces during their own testing. And then DBAs can benefit from the DevOps automation and best practices while ensuring database changes follow data management best practices and comply with the current standards. Okay, learning more at BMC: John O'Dowd and David Schipper are the product owners here for IMS. These are the BMC documentation links for the user guides; for each of the offerings, you can scan the QR code and it will take you to the BMC Docs page.

 

[01:09:50] – Santosh Dorge, BMC (Guest Speaker)
Alright, thank you. The video and slides for this presentation will be posted on the virtual user group web page. If you've got any follow-up questions, you can write me an email. Thank you very much, guys. Any questions, or does anybody want to add anything?

 

[01:10:18] – Attendee
I have got a question. Am I audible?

 

[01:10:25] – Amanda Hendley, Planet Mainframe (Host)
Yes.

 

[01:10:28] – Attendee
So we said, like, hey, to access this we require the IMS catalog, right? Now, one thing is loading the DBDs out there, the definitions, and we also need to load the copybook, right? How do we load the copybook, and how do we handle the OCCURS clause and REDEFINES while it is getting loaded into the IMS catalog?

 

[01:10:59] – Santosh Dorge, BMC (Guest Speaker)
Well, the IMS catalog stores the metadata information about the databases. We don't have to define the OCCURS clause, and we don't have to load the copybooks into the catalog. It's metadata about the IMS databases, like what are the separate fields of each segment, as in a table, right?

 

[01:11:30] – Attendee
Yeah. And that's like, the DBD will have the database name, the segments, whether a segment is on level one or level two, all those; and the PSB has what database it can access, those definitions. I am able to load it. Basically, we are trying to go down this hill and we are stuck, kind of, wherein, hey, I loaded this. Now when I'm going to write code in Java, I'm going to write select field one, comma, field two. That field one, field two has to be defined in the IMS catalog, and that will usually come from a copybook name or somewhere. What is the easy way to... yeah.

 

[01:12:08] – Santosh Dorge, BMC (Guest Speaker)
That will come there. There is a standard procedure to define the catalog. When you define the catalog, or when you add the ACB to the catalog, the database ACB, it creates the fields that are already defined in the DBD.

 

[01:12:36] – Attendee
Okay, so then you are saying, like, in the DBD you have to specify byte by byte: hey, the first ten bytes are this field, the next ten bytes...

 

[01:12:43] – Santosh Dorge, BMC (Guest Speaker)
Yes.

 

[01:12:45] – Attendee
Okay.

 

[01:12:45] – Santosh Dorge, BMC (Guest Speaker)
Generally, in the old systems which didn't have the IMS catalog, I'm not sure if we have an alternate way available; maybe the catalog experts at BMC would answer. And if you're a BMC customer, we can do specific targeted sessions with you guys. You can request that from Dave Schipper and John O'Dowd. Let me show that slide to you. You can write an email to Dave Schipper and John O'Dowd, and we'll arrange a workshop or targeted sessions for you guys. Now, coming to your question: when you define each and every field for the segments, that automatically gets populated into the catalog at the time of catalog population. And that way, you don't have to insert into the catalog or modify the catalog from the IMS Explorer.

 

[01:13:53] – Attendee
And that's where I read about that: hey, I can do that in the DBD source itself. I can give all the field definitions, what the external name should be. But then what it means is, every time I have to go and update my DBD, and whenever I change a DBD, to harden it means an outage is required. When we go the copybook route, it is just like, hey, a copybook changed; I take the copybook and reload it to the catalog, and the field is now available to the application.

 

[01:14:31] – Santosh Dorge, BMC (Guest Speaker)
Okay, how to do that? We can do the, yeah, I think.

 

[01:14:35] – Attendee
We will require some sessions and I can drop that email?

 

[01:14:39] – Santosh Dorge, BMC (Guest Speaker)
Yeah, definitely. Which organization do you work with?

 

[01:14:45] – Attendee
JP Morgan

 

[01:14:47] – Santosh Dorge, BMC (Guest Speaker)
Yeah, you can write the email to John and Dave Schipper, and we'll get the expert in that area to speak with you guys, or do a session or workshops.

 

[01:15:01] – Attendee
Okay.

 

[01:15:05] – Santosh Dorge, BMC (Guest Speaker)
Thank you.

 

[01:15:12] – Attendee
Are any of your clients using this in real production? And where I'm going is, I want to understand: when someone writes a select star there, and we are converting the DDM to DL/I and then executing it, a lot of data is getting onto the network, right? How is this going to add to the performance, or to the network chatter?

 

[01:15:37] – Santosh Dorge, BMC (Guest Speaker)
IMS Connect supports a 32K response at a time, so that's expected if you are expecting a huge response from IMS Connect to an outside application. Obviously, the backend, IMS Connect, is a lot faster than the front-end applications, and that works. But as you say, yes, it takes time to complete that SQL execution.

 

[01:16:11] – Attendee
Okay.

 

[01:16:21] – Amanda Hendley, Planet Mainframe (Host)
Any other questions? Okay, it looks like there is a little bit of chat happening. If everyone wants to capture the contact information on the page, we'll leave it up another minute. All right, let me get my screen share going. Santosh, I want to thank you for presenting today. As mentioned, we will be posting the video, and you'll have the video, the transcript, and the presentation deck available to you so you can go back and look at it later; that should be in the next couple of weeks. If you are not already on our mailing list, please join it on the user group page. You'll get the newsletter, event announcements, and meeting announcements, and you don't want to miss those. Before we head out today, I just want to put a call out there that we're looking for session ideas and speakers for this IMS user group. So if you have any great sessions that you think other people would benefit from, or anything that you have questions about that you would like us to get a presenter on, we'd love to get your ideas so that we can finish building out next year's schedule with the topics that are most important to you.

 

[01:18:11] – Amanda Hendley, Planet Mainframe (Host)
So you can drop those to me in chat today, or you can send them to my email address, ahendley@planetmainframe.com. Again, another quick plug for the Arcati Mainframe Yearbook: don't want you to miss out on that opportunity, and we certainly want to have a robust survey, so check that out. If you are not already involved with our social media for the user groups: those have recently changed, but not since the last meeting. So if you are on Twitter, Facebook, LinkedIn, or YouTube, you can find us. And in a few cases we have started to combine the channels so that you don't have to look for Db2, IMS, and CICS as separate channels; you can find all the information in one place. So that is where we are located on social media. And again, I want to thank our sponsors, BMC and Planet Mainframe, for partnering with us. If you are looking for more information about BMC, they've got a lot of information online; Santosh also shared some links with you as well, but there's a great blog and resources available to you. And I want to give you a chance, if you've got any topic ideas, to drop them in.

 

[01:19:39] – Amanda Hendley, Planet Mainframe (Host)
But for now, I want you to save the date for February 13, 2024. Oh my gosh. It’s already the last couple of weeks of the year, so February 13 will be our next meeting date. That is a Tuesday, same time, same place. So watch out for the event announcement and we’ll post it online shortly. All right. With that, I hope everyone has a wonderful rest of the day. Santosh, thank you again for being our speaker today.

 

[01:20:10] – Santosh Dorge, BMC (Guest Speaker)
Thank you.

 

[01:20:11] – Amanda Hendley, Planet Mainframe (Host)
All right.

Upcoming Virtual IMS Meetings

April 9, 2024

Virtual IMS User Group Meeting

Steve Nathan
IBM

June 11, 2024

Virtual IMS User Group Meeting

August 13, 2024

Virtual IMS User Group Meeting