Virtual Db2 User Group Sponsors

IntelliMagic

Virtual Db2 User Group | January 2024

Driving Down Database Development Dollars

Marcus Davage, Lead Product Developer
BMC Software Ltd

Application developers and database administrators alike are under constant pressure to drive down costs and reduce CPU cycles. This talk suggests some technical tips and cultural changes to Developers, DBAs, and Operations, to address this ubiquitous challenge facing mainframe enterprises world-wide.

Marcus Davage, Lead Product Developer

Having graduated in Computer Science at Aberystwyth in 1990, Marcus has worked with mainframes ever since, as a programmer, systems programmer, consultant, database administrator, production support hero, production support villain and management piñata for various companies. He is now a Lead Product Developer for BMC Software. He holds an MBCS, CEng, CITP, and, more often than not, a glass of wine. He is an IBM Champion, an IBM Z and LinuxONE Influencer, Open Mainframe Project Ambassador, Joint Chairman of the UK Db2 User Group, a member of the IDUG Content Committee and the GSE UK Conference Planning Committee. In his spare time, he is a STEM Ambassador, leading a Code Club in a Welsh-medium junior school, and translating Raspberry Pi Foundation resources into Welsh. He blogs and casts pods, too. Apparently.

Read the Transcription

[00:00:00] – Amanda Hendley (Host, Planet Mainframe)
Welcome, everyone. I see some familiar faces coming in. Thank you for joining us today. We're going to just make sure you're on mute, and we'll take Q&A afterwards. All right, we are two minutes after our start time, so I'm going to go ahead and get started. Welcome. My name is Amanda Hendley. I am managing editor at Planet Mainframe, but I also have the privilege of hosting the Db2 user group. So I'm excited to have you here today. And we're going to hear from Marcus Davage. So, Marcus, welcome.
 
[00:00:44] – Marcus Davage (Guest Presenter)
Thank you.
 
[00:00:46] – Amanda Hendley (Host, Planet Mainframe)
And if you haven't noticed, we have a new URL for the user groups as part of the recent formal merger with Planet Mainframe. So virtualusergroups.com is now the landing page for all the user groups. You can check that out and find information about all of our upcoming sessions there. So let's get started with a brief run-through of our agenda. Nothing's really changed too much on it. We're going to do our introductory remarks as always, we'll hear from Marcus for his presentation, have some Q&A time, and then we have a couple of news items, articles and announcements, and then we'll announce the next session. So let's first thank our partner. IntelliMagic is a sponsor of the Db2 user group series. So check them out. They've got a lot of resources for you, and we couldn't do this without the support of our sponsors. Now, for today's session, I mentioned Marcus is joining us for Driving Down Development Dollars. If you haven't met him, he is a computer science graduate and he has worked on mainframes in various roles, including production support hero, production support villain, and a management piñata.
 
[00:02:22] – Amanda Hendley (Host, Planet Mainframe)
I love that. He is now a lead product developer at BMC Software, and he holds an MBCS, a CITP, and, more often than not, a glass of wine. He blogs, he podcasts, and he is probably sitting on the committee of your favorite tech conference. So with that, Marcus, thank you so much for joining us. I'm going to let you take over.
 
[00:02:51] – Marcus Davage (Guest Presenter)
Thank you very much, Amanda. Let me share my screen. There we go. Okay, off I go. So yes, as Amanda said, I'll be talking about tips for application programmers and DBAs to help reduce CPU costs. As she said, I'm a developer for BMC Software working on the AMI Data solutions, and this will be our agenda for my presentation. I'll be facing the challenge of CPU savings, picking out some characters, and I'll be talking about collaboration, and then hopefully a conclusion. So what's the challenge? How many of you have been through cost-cutting exercises on the mainframe? I've lost count of the headcount reductions that I've narrowly missed. From "We are outsourcing all of our developers to [insert country here]" to "Oh dear, they can't even write COBOL. Oh, too late. Our developers are gone." I've gone from "We don't need no DBAs" to "Please come back. We need you. We are sorry." Or "This software is too expensive. Oh no, the ISV has stiffed us again." Or "Let's move this workload off the mainframe to save costs. Oh dear. The mainframe costs haven't gone down, and now I have to pay additional server and staffing costs."
 
[00:04:47] – Marcus Davage (Guest Presenter)
Or "The mainframe is too expensive. Let's replace it with thousands of servers instead. Whoops. The bank has gone down." Or "This mainframe is too expensive. Let's chuck our workload up into the cloud. Oh, rats. Our cloud provider hiked their prices up by 29%." So anyone who has ever worked on the mainframe has faced challenges like this at one time or another: the challenge of cost savings. So, mainframes, as you all know, are designed to be used by large numbers of people. And most of those who interact with mainframes are the end users, people who use the applications that are hosted on the system. However, because of the large numbers of end users and applications and the sophistication and complexity of the system software that supports the users and applications, we need a variety of roles to operate and support the mainframe system. In the IT field, these roles are referred to by a number of different titles: system programmers, sysadmins, security administrators, database administrators, operators, production control analysts, schedulers, performance and capacity analysts, application designers, architects, and programmers each have their own role to play. But once established, the machine workload can increase, the users can increase, but the infrastructure support headcount doesn't have to.
 
[00:06:17] – Marcus Davage (Guest Presenter)
So with some judicious jiggery-pokery and development, the costs can be kept down. So let me introduce you to our four main characters in my story today: the developer, the DBA, operations, and the DevOps consultant. So here's a stereotypical millennial software developer. Ponytail, hoodie, geek t-shirt, running code on a Mac, sitting in a coffee shop turning coffee into code. Probably never seen a green screen in his life. I myself started my own mainframe career as a developer, writing COBOL and Cincom MANTIS. Anyone remember that? Accessing a Cincom SUPRA database, I then moved on to Natural and ADABAS, and then on to Db2. And when I became a Db2 sysprog, I wrote in Assembler and REXX. And then the hubris kicked in and I looked down on dumb developers who were writing inefficient code, bringing my finely tuned, beloved Db2 subsystem into disrepute. After a while consulting as a Db2 consultant, I became a DBA, designing tables, doing production call-outs, becoming a management piñata. I spent a lot of time writing REXX applications to improve productivity, performance, monitoring, recoveries and reporting. But now I'm a developer again. I approach my job with a newfound respect and admiration, backed up by 20 years of DBA performance tuning.
 
[00:08:08] – Marcus Davage (Guest Presenter)
So what are the challenges facing a developer? IBM mainframe application developers face these key issues, many of which are ongoing concerns in the mainframe space. Skills gap: as experienced mainframe developers retire, finding and training new talent becomes increasingly critical. Security and compliance: security remains a top concern, and mainframes store a vast amount of sensitive data. Developers need to stay updated on security best practices and compliance requirements, especially with the evolving landscape of data protection regulations. DORA is a regulation in Europe which is raising its head as we speak. Integration challenges: integrating mainframe applications with newer technologies and getting legacy applications to work in the 21st century means developers have to find efficient ways to bridge the gap between legacy and modern systems. Modernization demands: there's continued pressure to modernize legacy mainframe applications as businesses seek to leverage cloud computing and microservices. Cost management: mainframe environments can be costly and businesses are always looking for ways to reduce operational expenses. Developers are under pressure to optimize code for performance and resource utilization, and the need for greater agility and faster development cycles pushes mainframe developers to adopt DevOps. Automation tools are used to streamline development and deployment processes, and many applications have dependencies on older technologies, so developers need to manage these dependencies whilst modernizing applications and making sure that their applications are compatible.
 
[00:10:26] – Marcus Davage (Guest Presenter)
Software developers were among the many unsung heroes of the pandemic, working furiously in the background to enable companies to shift to digital overnight, pushing the boundaries of innovation and providing the backbone of digital infrastructure to manage millions of new online users working from home. Since then, being a developer has not been an easy job. They are under constant pressure to deliver more code more quickly, more often, more correctly, with shorter cycles, tighter deadlines, changing requirements, heavier workloads, bug fixing and testing. They're expected to innovate and keep up to date with coding practices, all while reducing costs. Surveys have been acknowledging the increasing pressure for businesses to transform throughout the course of the pandemic, driving a new pace of change that a recent Cisco AppDynamics survey of 1,050 technologists across eleven countries said was pushing businesses to transform to digital-first operations three times faster on average than they had before the pandemic hit. This acceleration has created a complex patchwork of legacy and cloud technologies, the report's authors noted, with the additional complexity driving 89% of technologists to report feeling under immense pressure at work. So what can we do to help?
 
[00:12:30] – Marcus Davage (Guest Presenter)
More than can be included in one presentation. This is a Db2 user group after all, so let's focus on what a Db2 developer and DBA can do to reduce costs. Let me show you a few challenges and examples that I've picked up on my journey. So, careless coding costs. We have two unrelated tables here: table one and table two, each with an ID column, the primary key, and a column containing some kind of text. Oops. Similar data types, similar designs, but unrelated data. A Cartesian product is where two or more tables are joined on no predicates, so that for each row in table one you get every row in the joined table. This is also called a cross join, probably because the DBA will get very cross if he or she catches you doing one. So for each row in one table, you get all of the rows in the other table. This is a senseless result set because the tables aren't necessarily related. So if you do join tables, use a sensible join clause and make sure the data returned is sensible, consistent and related. And thirdly, don't use SELECT *; only select the columns and rows you're interested in.
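As a rough sketch of that point (T1 and T2 are hypothetical table names, not the ones from the slides), the difference looks like this:

    -- Cartesian product / cross join: no join predicate, every T1 row paired with every T2 row
    SELECT *
      FROM T1, T2;

    -- A sensible join on related columns, selecting only the columns you actually need
    SELECT T1.ID, T1.TXT, T2.TXT
      FROM T1
      JOIN T2
        ON T2.ID = T1.ID;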
 
[00:14:21] – Marcus Davage (Guest Presenter)
This reduces I/O, CPU and network traffic. Here's another one I've actually seen: SELECT CURRENT DATE into a host variable from a 70-million-row online customer table. No qualifiers, no WHERE clause, no predicates. No, really, this statement was executed per transaction in a live banking system. So, a quick win. First answer: SELECT CURRENT DATE into the host variable from SYSIBM.SYSDUMMY1. Db2 recognizes the statement as calling an internal function against SYSDUMMY1, and it performs no I/O. In the second answer, Db2 gets the date register from the environment without the overhead of a SELECT. But why should you bother asking Db2 to ask the operating system to get the time when you can do it yourself? In COBOL it's the CURRENT-DATE function, in Assembler it's the store clock (STCK) instruction, and in Java you can use a JNI wrapper for the z architecture store clock instruction. So what's wrong with this? SELECT CURRENT APPLICATION COMPATIBILITY into a host variable from SYSDUMMY1. Nothing looks wrong with that. It uses SYSDUMMY1, the optimizer recognizes this, and it performs no I/O. So what's wrong? Well, nothing's wrong unless it's in the middle of a loop and it's executed for each iteration of that loop, which could be in the hundreds of thousands.
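Sketched out (the customer table name and the host variable are hypothetical stand-ins), the three variants look roughly like this:

    -- The expensive version actually seen in production, issued against a 70-million-row table
    SELECT CURRENT DATE INTO :HV-DATE FROM CUSTOMER_TABLE;

    -- Quick win: SYSIBM.SYSDUMMY1 performs no table I/O
    SELECT CURRENT DATE INTO :HV-DATE FROM SYSIBM.SYSDUMMY1;

    -- Better still: assign the special register directly, no SELECT at all
    SET :HV-DATE = CURRENT DATE;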
 
[00:16:33] – Marcus Davage (Guest Presenter)
Just do it once outside the loop. The APPLCOMPAT (application compatibility) setting isn't going to change suddenly during the execution of a package, is it? Next, what happens when you miss the first index key of a WHERE clause? You may get a CPU-intensive non-matching index scan, or worse, a table space scan. So provide as many index predicates as you can in your WHERE clause. Next, what happens when you don't miss the first index key of a WHERE clause, but its cardinality is one? You start off with a matching index scan, but Db2 then realizes it's scanning all the index pages because they all qualify, and it might switch to a sequential scan in the middle of your unit of work. So speak with your DBA. Do some data analysis. You might need to change the order of the key columns in the index in order to reduce the initial number of results first. I'll talk about this in a little bit more detail later on. So here are some SQL tips for your Db2 developer.
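A minimal sketch of both points, the loop-invariant lookup and the leading index column (table, column and host variable names are my own, not from the talk):

    -- Fetch the invariant value once, before the loop, instead of on every iteration
    SELECT CURRENT APPLICATION COMPATIBILITY
      INTO :HV-APPLCOMPAT
      FROM SYSIBM.SYSDUMMY1;
    -- ...then reuse :HV-APPLCOMPAT inside the loop rather than re-issuing the SELECT.

    -- Provide the leading index column in the WHERE clause where you can;
    -- assuming an index on (ACCOUNT_ID, TXN_DATE):
    SELECT TXN_AMOUNT
      FROM TRANSACTIONS
     WHERE ACCOUNT_ID = :HV-ACCOUNT
       AND TXN_DATE  >= :HV-FROM-DATE;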
 
[00:17:55] – Marcus Davage (Guest Presenter)
I would assert that 90% or more of Db2 performance issues and CPU costs are down to application SQL. So here are some hints and tips. Optimize your SQL queries: use appropriate indexes, minimize table space scans, reduce unnecessary joins, and reduce sorts. Reduce data transfer. I'll be talking about this in a little more detail later on, but to minimize the amount of data transferred between the application and the Db2 database, make sure you fetch only the columns and rows needed. Don't do SELECT *. Use index predicates. Try not to sort if you can help it. If you have to sort, can you use the clustering index? FETCH FIRST n ROWS or FETCH FIRST n ROWS ONLY can limit the result set. Use native SQL stored procedures or user-defined functions: that embeds the processing into the Db2 engine, and it's effectively free as it uses the zIIP processors, and it drastically reduces network traffic as well. When using dynamic SQL, try and use parameterized queries rather than hard-coded values so that when Db2 prepares the statement in the dynamic statement cache, they can be reused and it can avoid recompiling them each time, which saves CPU cycles and reduces costs.
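For instance, a small sketch of the dynamic SQL point (ACCOUNTS and its columns are illustrative names):

    -- Hard-coded literal: every distinct value becomes a separate statement in the dynamic statement cache
    SELECT BALANCE FROM ACCOUNTS WHERE ACCOUNT_ID = 123456;

    -- Parameter marker: one prepared statement that can be reused from the cache
    SELECT BALANCE FROM ACCOUNTS WHERE ACCOUNT_ID = ?;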
 
[00:19:46] – Marcus Davage (Guest Presenter)
Connection pooling: implement connection pooling to reduce database connections, reducing the overhead of establishing new connections for each transaction. Consider using high-performance DBATs, particularly with off-mainframe applications that use ODBC or JDBC.
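A rough sketch of enabling high-performance DBATs, assuming the usual recipe of RELEASE(DEALLOCATE) packages plus the DDF PKGREL option (the collection and package names are placeholders):

    -- Bind or rebind the packages used by the remote application with RELEASE(DEALLOCATE)
    REBIND PACKAGE(COLLID.PKGNAME) RELEASE(DEALLOCATE)

    -- Db2 command telling DDF to honour the packages' RELEASE bind option
    -MODIFY DDF PKGREL(BNDOPT)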
 
[00:20:08] – Marcus Davage (Guest Presenter)
Data compression: consider using data compression to reduce storage costs without sacrificing performance that much. And efficient data types: use appropriate data types to minimize storage requirements. For example, use an INTEGER instead of CHAR for storing numeric values; use DATE instead of INTEGER or CHAR for storing dates. If you're using LOBs, you need to analyze the LOB data to see how much of the LOB space is actually occupied by data, and then maybe store them as inline LOBs within the base table. This drastically reduces unnecessary I/O, and they will then be eligible for table compression, which is far more efficient than LOB compression. And if you have to, you can put the table in a bigger buffer pool to get more rows per page. However, if your LOB data isn't accessed that frequently, then that actually might be a waste of time and a waste of space, so keep them in the LOB table space. Your mileage may vary, and you'd have to do some analysis, but inline LOBs can be drastically efficient. Indexing strategy: create your indexes based on query patterns. I'll talk about this a little later.
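A sketch of the inline LOB idea, assuming a hypothetical DOCS table (the names and the 2000-byte inline length are purely illustrative; size the inline portion from your own LOB analysis):

    -- Define the LOB column with an inline portion kept in the base table
    CREATE TABLE DOCS
      (DOC_ID   INTEGER NOT NULL PRIMARY KEY,
       DOC_TEXT CLOB(1M) INLINE LENGTH 2000);

    -- Or alter an existing LOB column to carry an inline portion
    ALTER TABLE DOCS
      ALTER COLUMN DOC_TEXT SET INLINE LENGTH 2000;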
 
[00:21:42] – Marcus Davage (Guest Presenter)
Avoid creating unnecessary indexes that add to the overhead of maintenance, and by maintenance I mean inserts, updates, and deletes. Avoid stage three predicates. Stage three predicates are those applied within the code and outside Db2: Db2 has already provided you with the result set, and then further pruning of the data is called stage three. Let Db2 give you exactly the results that you require. Don't filter out rows within your code; it's a waste of network I/O and CPU (there's a sketch of this below). Collaborate with the DBAs. Collaboration is a big point. Work closely with the DBAs to understand configurations, tuning options, best practices, buffer pool placement. Do you have reference tables that are accessed so frequently that locking them in memory would be a good option? Seek their guidance on optimizing the physical design of tables and indexes: reordering the index keys based on cardinality, including columns in an index, or changing the clustering index to reduce sorts. Invite them to agile scrums. Resource constraints: be aware of resource constraints when designing and coding your applications. Optimize your code to work efficiently within CPU, memory and I/O constraints. Try and use lock avoidance techniques where possible.
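Picking up the stage three point, a minimal sketch (ORDERS and its columns are names invented for illustration): rather than fetching every row for a customer and discarding the unwanted ones in the program, push the filter into the WHERE clause so Db2 returns only what you need:

    -- Filtering done by Db2, not by the application after the fetch
    SELECT ORDER_ID, ORDER_TOTAL
      FROM ORDERS
     WHERE CUSTOMER_ID = :HV-CUST
       AND STATUS      = 'SHIPPED';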
 
[00:23:24] – Marcus Davage (Guest Presenter)
If data is only being read and consistency is not crucial, use WITH UR to reduce locking, and try to avoid sorts as much as possible. Stay updated on the latest Db2 features by attending conferences, seminars and webinars, joining forums, and adding yourself to discussion groups. I'll talk about VS Code Db2 extensions a little later on, but they will help you build, validate, format and test your SQL statements. And finally, EXPLAIN. The Db2 EXPLAIN feature for analyzing query execution is the best; performance tools can be used with visual explains to gain a better understanding of how your query is performing. So by following these tips, you're well on the way to contributing to cost reduction while maintaining the performance and reliability of your applications. So let's have a look at query optimization. This is a drastically simplified diagram of the internal processing of an SQL search argument, from whence came the term "search ARGument-able", or sargable (or sar-GABLE, depending on which side of the pond you live). So starting on the right, your users fling their SQL at Db2. They use SPUFI, QMF or statements bound in end-user programs.
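For example, a small sketch of the WITH UR point (table and column names are hypothetical):

    -- Read-only lookup where uncommitted read is acceptable: avoids taking read locks
    SELECT CODE, DESCRIPTION
      FROM REFERENCE_CODES
     WHERE CODE_TYPE = :HV-TYPE
      WITH UR;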
 
[00:25:16] – Marcus Davage (Guest Presenter)
Whatever it is, if it's dynamic, the statement is prepared; if it's static, the package has been bound. Either way, the access path to get at that data has been optimized as best as the Db2 optimizer can, given the information in the SQL statement itself and the stats of the tables that the statement is querying. And so the optimizer looks at the SQL statement and determines which of the predicates in the WHERE clause can be satisfied using index access. A stage one predicate is evaluated by the Data Manager, and that's evaluated first. The results of this are then passed to the Relational Data Server (RDS), which applies stage two predicates. The more data passed between the Data Manager and RDS, the more overhead; the less data passed, the less the overhead. So it's best to get as much data as possible filtered out by the Data Manager to reduce your overheads. All indexable predicates are stage one, but not all stage one predicates are indexable. So let's have a look at an example. A simple five-column table: two key columns, three data columns. Player one provides only one of the two key columns.
 
[00:27:19] – Marcus Davage (Guest Presenter)
The search argument is indexable, KEYCOL1 = 1, and it's stage one, meaning the Data Manager scans the index for KEYCOL1 entries matching 1. There are two of them. It fetches the data and passes it to RDS, which passes them all back to the user, because the user has only asked for KEYCOL1 = 1, of which there are two rows. Player two provides both key columns. The search argument is indexable and stage one: both key columns are provided, there is a unique match, and only one row is returned to RDS, which then passes that back on to the user. Player three provides one key column and one data column. The Data Manager retrieves the two rows for KEYCOL1 and passes them to RDS. RDS applies a filter for the stage two non-indexable predicate, DATACOL1 = 'CAKE', and passes the remaining row to the user. Player four doesn't provide any keys, no stage one sargable arguments, so the Data Manager has to do a table space scan and passes all of the rows to RDS, which then applies the stage two predicate of DATACOL1 = 'CAKE' to filter down to two rows after a table space scan.
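The four queries might look something like this, a sketch with illustrative names since the exact slide SQL isn't in the transcript:

    -- Table T, indexed on (KEYCOL1, KEYCOL2), with data columns DATACOL1..DATACOL3
    SELECT * FROM T WHERE KEYCOL1 = 1;                        -- player 1: matching index scan, stage 1
    SELECT * FROM T WHERE KEYCOL1 = 1 AND KEYCOL2 = 2;        -- player 2: unique index match, one row
    SELECT * FROM T WHERE KEYCOL1 = 1 AND DATACOL1 = 'CAKE';  -- player 3: stage 1 index access, stage 2 filter in RDS
    SELECT * FROM T WHERE DATACOL1 = 'CAKE';                  -- player 4: no key columns, table space scan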
 
[00:28:54] – Marcus Davage (Guest Presenter)
These are then passed into the sort, which uses the DSNDB07 work database, and the data then goes back to RDS and back to the user to return the two rows containing cake. The optimizer is pretty good at rewriting queries to be more efficient. In fact, it's probably better at it than you. However, there are many things you can do to improve your own SQL performance. For example, don't use scalar functions on columns in your search predicates. In example number one, the first query is stage two because a function is specified on a column: Db2 reads each row in the table space, applies the YEAR function to the hire date column to get the desired result set, then passes that data to RDS. Using an alternative approach with the same result makes the query stage one and faster and cheaper by only returning the rows between those dates, such as hire date between January the 1st and December the 31st. This makes the query stage one and indexable, and faster and cheaper. Talking of betweens, what's the difference between a BETWEEN and a pair of greater-than-or-equal-to and less-than-or-equal-to predicates?
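Something like this (EMP and HIREDATE are illustrative names, and the year is arbitrary):

    -- Stage two: the scalar function on the column rules out index access
    SELECT EMPNO FROM EMP WHERE YEAR(HIREDATE) = 2023;

    -- Stage one and indexable: compare the column itself against a date range
    SELECT EMPNO FROM EMP WHERE HIREDATE BETWEEN '2023-01-01' AND '2023-12-31';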
 
[00:30:48] – Marcus Davage (Guest Presenter)
So it's possible to use BETWEEN to compare one value to two columns, but this is stage two, and Db2 has to evaluate each row returned from the Data Manager: WHERE host variable BETWEEN column one AND column two. Once again, Db2 is getting all of the data. Instead, do this to narrow the amount of data being returned: WHERE the host variable is greater than or equal to column one AND the host variable is less than or equal to column two. This is stage one, evaluated in the Data Manager, and only the qualifying rows are passed back to RDS. Next example: code your most restrictive predicates first. There was a software product where the first three columns of most of the tables were company, product and policy. Company cardinality was always one, an integer of value one, because the company only managed its own products. The product column, a three-byte character, had a cardinality of around ten to twenty. And the policy column was an integer between one and a billion. So if the cardinality of the company column was one, Db2 was going to do a non-matching index scan.
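Written out (RATES, LOW_LIMIT and HIGH_LIMIT are names invented for illustration):

    -- Stage two: a value compared with BETWEEN against two columns
    SELECT POLICY_ID FROM RATES
     WHERE :HV-VALUE BETWEEN LOW_LIMIT AND HIGH_LIMIT;

    -- Stage one: the same test written as two simple comparisons
    SELECT POLICY_ID FROM RATES
     WHERE :HV-VALUE >= LOW_LIMIT
       AND :HV-VALUE <= HIGH_LIMIT;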
 
[00:32:47] – Marcus Davage (Guest Presenter)
Well, yeah; a matching index scan would just match all of the entries. If the product cardinality was around ten to twenty, it would bring back that percentage of rows, but the policy was an integer between one and a billion. If you specified the policy column or key first in the index, then Db2 would reduce the number of rows returned from the Data Manager tremendously. So know your data. Understand what the relationship is between the key columns and the actual data, and what the cardinality is. Specify the higher-cardinality columns in your indexes first. The good news is IBM are adding more stage one predicates with each version, but that doesn't mean you should be lazy. So fetch only the columns and rows that are needed. Don't do SELECT *. Use index predicates. Join on index columns. If you must sort, try and use a clustering index instead of an ORDER BY; always join on clustered columns, but avoid sorting if you can. There are plenty of presentations around explaining what the different permutations of the FETCH statement are: FETCH FIRST n ROWS, FETCH FIRST n ROWS ONLY. They can be used to limit result sets, along with scrollable cursors, FETCH NEXT n ROWS and OPTIMIZE FOR n ROWS, various ways of reducing data transfer.
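For instance (table and column names are illustrative):

    -- Limit the result set to the rows the application will actually use
    SELECT ACCOUNT_ID, BALANCE
      FROM ACCOUNTS
     WHERE BRANCH_ID = :HV-BRANCH
     ORDER BY BALANCE DESC
     FETCH FIRST 10 ROWS ONLY;

    -- Or tell the optimizer how many rows you really intend to fetch
    SELECT ACCOUNT_ID, BALANCE
      FROM ACCOUNTS
     WHERE BRANCH_ID = :HV-BRANCH
     OPTIMIZE FOR 10 ROWS;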
 
[00:34:45] – Marcus Davage (Guest Presenter)
Also, native stored procedures reduce network traffic; they're reusable and they're effectively free if they use the zIIP. What's your index strategy? Obviously, index on a primary key. Do index on foreign keys. Always join on clustering indexes. When using RI, always index on foreign keys and cluster on foreign keys (there's a sketch of this below). And finally, this rather odd bullet point: if your SELECT performance is being degraded, then add an index, and add another index for another query, and another index for another query, until the DML performance starts to degrade. When your insert, update and delete performance take a hit, drop the last index that you just created. Here's a little trip down Visual Studio Code. It's a free source code editor made by Microsoft for Windows, Linux, macOS, and now Raspberry Pi. It's got support for debugging, syntax highlighting, intelligent code completion, snippets, code refactoring, and embedded Git. It's customizable, it supports almost every programming language, and it's got a lot of contributed extensions, including support for Zowe and Db2. IBM's Z Open Editor is an extension for Visual Studio Code that provides language support for COBOL, PL/I, High Level Assembler, and JCL.
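Picking up the foreign key point from above, a minimal sketch (ORDERS and CUSTOMER_ID are invented names):

    -- Index the foreign key so RI checking and joins to the parent table avoid scans
    CREATE INDEX ORDERS_CUST_IX
      ON ORDERS (CUSTOMER_ID);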
 
[00:36:44] – Marcus Davage (Guest Presenter)
This is some COBOL, this is some Assembler; it's got nice formatting and coloring. The Db2 Developer Extension allows you to query databases and objects: indexes, packages, plans, tables. You can browse them and get metadata about them. You can issue SQL statements. There's that wonderful Cartesian product with its result set, and you can save the result set in VS Code or as CSV files. There's another extension called Db2 Connect. You can browse objects in the same way, but what's nice about this one is that you have code snippets which help you design your code. It completes MERGE, SELECT, CREATE PROCEDURE, that kind of thing. It completes the statements for you, and all you have to do is fill in the blanks. So the EXPLAIN function, as I mentioned earlier, is the single most useful performance monitoring tool that Db2 possesses, in my opinion. It currently supports 23 tables of differing functions, the primary one being the plan table. So when you bind a plan or a package or a statement with the EXPLAIN YES option, or if you issue the EXPLAIN command in SPUFI, these tables are populated with information about the access paths that the Db2 optimizer has chosen for each statement.
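A minimal sketch of that workflow (the query number, the statement being explained, and the PLAN_TABLE columns chosen are just examples):

    -- Explain one statement under a chosen query number
    EXPLAIN PLAN SET QUERYNO = 101 FOR
      SELECT ORDER_ID FROM ORDERS WHERE CUSTOMER_ID = ?;

    -- Then inspect the access path the optimizer chose
    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, ACCESSTYPE, MATCHCOLS, ACCESSNAME, PREFETCH
      FROM PLAN_TABLE
     WHERE QUERYNO = 101
     ORDER BY QBLOCKNO, PLANNO;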
 
[00:38:22] – Marcus Davage (Guest Presenter)
This is sheer gold for performance analysis. Most vendors have built tools around this set of tables and their functions, and there are plenty of presentations out there if you need to have the EXPLAIN statement explained to you. So while we're still on the developer, let's talk about other ways of reducing CPU costs besides SQL. I've seen application performance improve merely by updating the compiler and keeping the compiler up to date. Even Db2 itself had its performance improved when IBM recompiled all of its PL/X source code with the latest compiler. In the COBOL compiler, the ARCH option specifies the machine architecture for which the executable program instructions are to be generated. So for the z9 at the bottom, ARCH(7) provided 757 new assembler instructions; for the z196, ARCH(9) introduced 132 new instructions; the z13 introduced 152 new instructions; and the z16 at the top introduced 2020 new instructions. So if you specify a higher ARCH level, the compiler generates code that uses newer and faster instructions, but your application might abend if it runs on a processor with an architecture level lower than what you specify.
 
[00:40:28] – Marcus Davage (Guest Presenter)
With the ARCH option, use the ARCH level that matches the lowest machine architecture where your application runs. If you don't specify ARCH, the default is ARCH(7) for COBOL 6.2, ARCH(8) for COBOL 6.3, and ARCH(10) for COBOL 6.4. So never ever use an ARCH level above the lowest-level hardware in your environment, including your disaster recovery sites. For example, if your disaster recovery site is a z13, then you must use ARCH(11) no matter what your primary site is, just in case you need to fail over, and you don't want your application abending. Some other compiler tips: if you write in REXX, compile it. I've seen between five and 10% CPU savings from executing compiled REXX. What about compiling off-mainframe? There are vendor solutions out there that allow you to write in C, C++ or High Level Assembler on your PC. You compile on your PC using z-compatible compilers or assemblers, then you FTP the object and the DBRM to the mainframe, and on the mainframe you link and bind. So you're saving all of the compilation costs by doing it on your PC.
 
[00:42:05] – Marcus Davage (Guest Presenter)
What other coding tips can I offer? Well, horses for courses: is the programming language you're using appropriate for the application you are writing? You wouldn't write high-performance transaction processing in REXX or Python or Java. Or would you? We'll talk about that in a minute. Read Uncle Bob Martin's books on clean code. Reuse, don't repeat. Comment sensibly. Keep classes small. Use the distinctives of the language that you're writing in, and don't commit unnatural acts with them. Keep short code paths. For example, use ADD instead of COMPUTE: not all instructions are created equal, and the COMPUTE instruction consumes a lot more CPU than an ADD or a SUBTRACT. Use all the online resources available. Stack Overflow is not particularly mainframe-aware. ChatGPT is surprisingly good. GitHub Copilot and Tabnine have code completion functionality and great AI-specific tools that can be integrated into VS Code. There are some great extensions built by mainframers for mainframers in VS Code: BMC, Broadcom, Rocket, IBM. There are other great vendor performance tools and coding tools around. Use them to analyze your code as it runs. DBAs and sysprogs use them all the time, so why can't developers?
 
[00:43:57] – Marcus Davage (Guest Presenter)
You can use development tools as well, like BMC's AMI DevOps or Db2 SQL Assurance. SonarQube has got code smells and it's rules-based. The trouble with SonarQube's rules-based analysis is that the rules are written in Java, so you have to compile your Java rules into Java bytecode. Other automated testing tools are available besides BMC's AMI DevX. REST APIs can be reused if available; it saves you writing code if somebody else has written it for you. And surprisingly, Java compiles to bytecode and always uses the latest level of the architecture. It can be run on the zIIP, which reduces software costs. Warning: if you do overflow zIIP capacity, the system may run the workload on a general processor. Java may run faster than COBOL on a sub-capacity mainframe, as zIIPs always run at full speed. Most mainframe performance monitors now support Java, there's better garbage collection, and Andrew Rowley has written a blog post on Java versus C drag racing on a Z processor, and the results are quite surprising. So have a look at that. The next character is the DBA, and here she is, poor Jess, turning her sleep cycles into your uptime. When I became a Db2 sysprog writing Assembler and REXX, the hubris kicked in, as I said, and I looked down on dumb developers who were bringing my Db2 subsystem into disrepute.
 
[00:46:01] – Marcus Davage (Guest Presenter)
So now I approach my job with new respect and the admiration of 20 years of Db2 performance tuning. So what are the challenges facing a Db2 DBA? Data growth: this is always a major challenge. DBAs have to deal with ever-increasing volumes of data, which require efficient storage management, indexing strategies, data archiving, backups and recoveries. Or maybe DBAs could speak to the users and the business and say, do we need to archive? Do we need to delete? Security and compliance: data security is always a top priority. DBAs have to stay vigilant in implementing security measures, managing user access, and ensuring compliance with data protection regulations like GDPR and HIPAA. Cost management: cost control is an ongoing challenge, as is resource optimization. Integration with new technologies: cloud services, containerization and data analytics platforms all require DBAs to learn new skills. Vendor support: staying current with Db2 updates, patches and vendor software. Data privacy and ethics: DBAs have to navigate the ethical use of data, ensure the databases comply with evolving regulations, and make sure nobody's touching production without pulling a proper key and without change control. Performance optimization: always a constant concern. DBAs need to ensure the databases can handle workloads efficiently and respond to queries in a timely manner.
 
[00:48:11] – Marcus Davage (Guest Presenter)
Scheduling reorgs, zero-outage reorgs, is such a consideration in business availability, disaster recovery and, once again, high availability. When was the last time you exercised a recovery scenario? DBAs have to plan for both planned and unplanned downtime scenarios to minimize business disruptions. Complexity: the mix of on-prem, cloud and hybrid, these heterogeneous environments that DBAs have to manage, requires a deep understanding of Db2's capabilities in the context of the enterprise. Automation and DevOps: DBAs need to integrate database changes into these processes and workflows. I'll talk about that in a minute. Skill shortages: like many IT roles, there's a shortage of skilled Db2 DBAs, and organizations often struggle to find and retain qualified professionals. Migration and modernization: are we moving Db2 off the mainframe? Are we moving workloads onto the mainframe? DBAs should be involved in planning and executing these migration and modernization efforts. So what cost-saving tips can we suggest for the DBA? Optimize query performance with tools like BMC's Apptune and SQL Performance (other software vendors are available); continuously monitor and optimize SQL queries and query plans to reduce CPU and I/O consumption. That's what you pay vendors for, so if you don't know how to use the software, get them to show you.
 
[00:50:26] – Marcus Davage (Guest Presenter)
I've personally seen CPU savings of 450,000 pounds a year just by rebinding packages. Some of those packages hadn't been rebound since version six of Db2, and this was going to version eleven. Data compression: I've talked about LOBs and I've talked about compression. By bringing LOBs inline, you can use Db2 table compression, because LOB compression isn't as performant. Buffer pool tuning: do you want to lock frequently accessed tables in their own buffer pools? Do you want to segregate applications? Do you want to segregate random access from sequential access? Do you want to dynamically alter the buffer pools based on differing workloads? I would always segregate table spaces from indexes, keep the catalog and directory in buffer pool zero, and keep the sort work table spaces in their own buffer pool. And sometimes you can get better performance by moving data from 4K to 8K pages; that reduces I/O operations, especially if you're using inline LOBs. And sometimes, depending on the size of the table, you can get better performance by reducing the size of the pages. Monitoring and alerting: set up proactive monitoring and alerting systems to identify performance bottlenecks. Workload management: make sure your WLM policies are appropriate.
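A rough sketch of that kind of housekeeping (the package, buffer pool numbers, sizes and object names are purely illustrative):

    -- Rebind a long-untouched package so it picks up current optimizer improvements and statistics
    REBIND PACKAGE(COLLID.PKGNAME)

    -- Resize a buffer pool (Db2 command)
    -ALTER BUFFERPOOL(BP8K0) VPSIZE(20000)

    -- Keep data pages and index pages in separate pools
    ALTER TABLESPACE DB1.TS1 BUFFERPOOL BP2;
    ALTER INDEX SCHEMA1.ORDERS_CUST_IX BUFFERPOOL BP3;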
 
[00:52:15] – Marcus Davage (Guest Presenter)
Reassess your stored procedure policies. Automate as much as you can: automatic reorgs based on real-time stats saved half a million pounds off a CPU bill. Try and shift left your database changes to developers; I'll talk about this in a minute. Using DevOps orchestration tools with built-in DDL and DML validation rules, such as BMC's DevOps for Db2, SQL Assurance and Schema Standards, you can keep control of the rules, but the developers can do the job themselves, and the DBA will know that any performance issues will be picked up before they hit production. I've seen a 3 million pound reduction in CPU costs merely by analyzing SQL before it hits production. Index management: I've already talked about that. Work on your index strategies; use INCLUDE columns for frequently accessed data to reduce I/O. I've seen tables with more than 13 indexes on them, so my rule of thumb was between three and four, but your mileage may vary. And speak to the business: you must implement some kind of data archiving or purging strategies to manage data growth. Capacity planning: speak to your performance and capacity people, forecast future capacity, and plan for future upgrades or cloud migration or storage migration.
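For the INCLUDE column point, a minimal sketch (ACCOUNTS, ACCOUNT_ID and BALANCE are invented names; INCLUDE columns apply to unique indexes):

    -- An INCLUDE column can make a frequently run query index-only, avoiding data page I/O
    CREATE UNIQUE INDEX ACCT_IX1
      ON ACCOUNTS (ACCOUNT_ID)
      INCLUDE (BALANCE);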
 
[00:54:18] – Marcus Davage (Guest Presenter)
Optimize backup and recovery. You need to streamline the process of backup and recovery. Regularly test your disaster recovery procedures. Always keep recovery JCL at the ready. If you've got vendor recovery management tools like BMC's AMI Recover, you can build recover sets of logically related tables and recover them. You could even image copy them in cabinets, which is a collection of image copies for related tables in a single file. You need to speak to the business to work out each application's SLA. Do they vary? Do you need to image copy static reference data every day? Are daily incrementals and weekly fulls sufficient? (There's a sketch of this below.) So, last but one, we've got Ops. What are the Ops challenges? Pretty much the same as everybody else's: maintaining legacy systems; resource management across CPU, memory and storage; high availability, which requires robust disaster recovery and failover solutions; integration with modern technologies, and where does the cloud fit in with what you have on-prem; legacy application modernization, as some organizations seek to modernize their legacy mainframes by migrating them to newer platforms, which requires careful planning and execution; energy efficiency; automation and DevOps; security and compliance; a skilled workforce, because finding and retaining skilled mainframe operators is just as difficult as finding and retaining skilled DBAs and developers; cost management, a persistent concern; and capacity planning: are we going to upgrade our mainframe?
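Picking up the image copy point, a sketch of a straightforward copy step (the database, table space and DD names are illustrative):

    -- Full image copy taken while the object stays available for update
    COPY TABLESPACE DB1.TS1
         COPYDDN(SYSCOPY)
         FULL YES
         SHRLEVEL CHANGE

    -- Incremental copy for the days in between the weekly fulls
    COPY TABLESPACE DB1.TS1
         COPYDDN(SYSCOPY)
         FULL NO
         SHRLEVEL CHANGE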
 
[00:56:26] – Marcus Davage (Guest Presenter)
What operational involvement is needed? Vendor support and updates, performance optimization. So here are some tips to help our beloved operators. Consolidate and virtualize: explore opportunities to consolidate workloads onto fewer mainframes or implement virtualization technologies to maximize resource utilization. I've heard of hundreds of Linux servers having their workloads consolidated onto a single LinuxONE mainframe, and even hundreds of Microsoft Exchange servers being converted to an open source mail system and run on one IFL. Automation: automate routine mainframe tasks and operations. Legacy application modernization: move them to cost-effective platforms or use modern development practices. Disaster recovery efficiency. Performance monitoring and tuning: fine-tune configurations to maximize efficiency. DevOps automation tools: I'll talk about that in a minute. Application performance management: your vendor will have software to help you manage application performance. And collaboration: collaborate with developers, other teams, and your DevOps consultant. Here he is. DevOps provides developers with a well-defined process to deploy code from system to system. However, when there's a database change involved, the process generally has to stop. It becomes a manual process of contacting the DBA through a ticket, and then the DBA has to find time to determine what the developer is doing.
 
[00:58:40] – Marcus Davage (Guest Presenter)
What's changed? Will it impact my system? Does it follow our standards? So why is the DBA a bottleneck? This is where collaboration comes in. So here's our developer. He's happily coding away, making changes to applications. Oh, he needs a new column added to a table. So he asks the DBA. The DBA makes the change, and another change from another developer, and deploys the changes to prod, and prod sends a prod reorg back to the DBA. Do I have time to deploy it, what with everybody else's changes? I have to import, analyze, validate and execute against performance metrics, and verify against site standards, all of the production changes. I have to keep the system running. Plus I was called out for four hours last night. So you can see why the DBA is a bottleneck. So here is an agile DevOps developer delivering database changes to the DBA, and here's a DBA being invited to an agile scrum. Not really. Sorry, this is a joke. DBAs are never invited to agile scrums, and therein lies the problem. I'll bring this slide up so you can have a look at it for a minute. So the world I'm suggesting is one where database changes from the app developer are thoroughly impact-assessed using schema change management tools like BMC Change Manager, and the Db2
 
[01:00:31] – Marcus Davage (Guest Presenter)
table changes are encapsulated in the change orchestration process along with the code changes. The DDL, your data definition language, your CREATEs and your ALTERs and your DROPs, is validated against site standards that the DBAs have set. So the DBA codes these rules in a system like BMC AMI Schema Standards. In BMC AMI DevOps, the code statement is analyzed and compared against performance rules that the DBA has made. Is there a table space scan? Is there a non-matching index scan? Is there a SELECT *? All of these can be anticipated before the code hits production, and so the developer can then migrate all of their own changes through any environment up to, but obviously not including, production without involvement of the DBA, because the DBA has already written the rules that are driven by your orchestration tool. So, conclusion: the future is bright, the future is mainframe. Stewart Alsop, the editor of InfoWorld, formerly known as PC News Weekly, said, "I predict that the last mainframe will be unplugged on March 15, 1996." And then in February 2002 he said, "It's clear that corporate customers still like to have centrally controlled, very predictable, reliable computing systems, exactly the kind of systems that IBM specializes in."
 
[01:02:23] – Marcus Davage (Guest Presenter)
So the best explanation of the business value of a mainframe I have ever seen is a mainframe executive who documented for her management the business benefit they receive from the mainframe: 20 billion monthly Db2 transactions; 10 million monthly batch jobs; 79 uptime; 80 million lines of prod code; 1,200 monthly production changes; 40 million Db2 utility jobs annually; disaster recovery, a transition of 100% of their tier one applications in 4 hours; and a recovery of tier one production Db2 tables in 4 hours or less. Information from the BMC Annual Mainframe Survey 2021 shows that 92% of respondents see the mainframe as a long-term platform for growth, and 72% of extra-large shops have more than 50% of their data on the mainframe. Now, with the IBM z16 announced in 2022, we've got quantum-safe crypto, hybrid cloud developments and an AI-enabled processor. We can process up to a trillion web transactions a day and 300 billion AI inference operations a day, all made possible through the new Telum processors. Some more stats: extra-large shops, enterprises with more than 50,000 MIPS, saw their perception of the mainframe's ability to grow and attract new workloads grow from 53% to 67% in 2023.
 
[01:04:19] – Marcus Davage (Guest Presenter)
While cost optimization remains near the top of the list, there was a significant increase in enhancing automation, jumping from 36% to 46%. 30% of maximizers, those who are looking to increase their investment in the mainframe, are investing in improving application development quality, while only 11% of minimizers, those planning to decrease their investment levels, want to improve their app dev quality. And this final chart shows that 62% of all mainframe shops are adopting DevOps, but 71% of large shops, between 10,000 and 50,000 MIPS, already are. So the future of the mainframe is looking brighter than ever. You can learn more at bmc.com slash AMI Data. My name's Marcus Davage. Thank you very much for listening.
 
[01:05:34] – Amanda Hendley (Host, Planet Mainframe)
Thank you, Marcus. Now we have some time to take some Q&A. If anyone has any questions, you can pop them in the chat, or I suppose you can come on camera. We'll give that a couple of minutes. There is a comment from Perry: great presentation. Yeah, thank you very much, Perry. I did a screenshot of your last couple of slides with the stats on them. I think that's some great data.
 
[01:06:12] – Marcus Davage (Guest Presenter)
If you just Google BMC mainframe survey 2021 and 2023, you should be able to get those results.
 
[01:06:20] – Amanda Hendley (Host, Planet Mainframe)
Oh, great. Yeah, the BMC surveys are really good. I've pulled from them quite a few times for pulse.
 
[01:06:33] – Marcus Davage (Guest Presenter)
Yeah, it's pretty cool. It's a good temperature gauge.
 
[01:06:40] – Amanda Hendley (Host, Planet Mainframe)
Exactly. Well, I'm going to go ahead and share my screen. I've just got announcements. I'm going to cut my camera too; I'm getting a notice, so I apologize. Maybe this will keep me a little more stable there. So that was Driving Down Development Dollars. We have a couple of articles and news announcements that we thought you might be interested in. One came out in November, and it is about the cloud database offering for Db2; we thought it might be an interesting read for you. And these are QR codes; if you're not familiar, you can just use your phone, and obviously the links are also in our newsletter. So if you're not signed up for that, please do sign up at virtualusergroups.com. Second item: I want to invite you to join us at SHARE in Orlando in a couple of months. I will be there with Planet Mainframe and the virtual user groups, and I'd love to meet with any of you that are going to be there as well. We can get a coffee, get a drink, or I'm going to be recording some on-site video interviews. So if you are interested in doing that with me, give me a shout if you need to connect with me at any time.
 
[01:08:14] – Amanda Hendley (Host, Planet Mainframe)
You can reach me at ahendley@planetmainframe.com or through the virtual user groups page, but I hope to see you at SHARE. The job board: there is a new role posted for a principal technologist with Amtrak, located in the US. And as always, we're doing a Planet Mainframe call for contributors. So if you are interested in writing or presenting, you can do that through the site, or if you just want to chat with me about it, you can reach out directly. Our social media links have changed in the past months, so make sure you're staying in touch with us online. Our LinkedIn has seen some growth in the past couple of months, so there might be some more activity and more folks there than when you last checked, if it's been a minute. And as you probably know, we are posting videos up on our YouTube channel as well. And while you can see a lot of videos on the website, you can go back a very long way on the YouTube channel. Again, I want to thank our partner IntelliMagic, and with that, I want you to save the date for our next meeting: March 19, same time, same place, and see you there.
 
[01:09:43] – Amanda Hendley (Host, Planet Mainframe)
Marcus, thank you. It’s always enjoyable to chat with you.
 
[01:09:46] – Marcus Davage (Guest Presenter)
It’s been a pleasure. Thanks, Amanda.
 
[01:09:48] – Amanda Hendley (Host, Planet Mainframe)
All right, y’all have a great one.

Upcoming Virtual Db2 Meetings

May 21, 2024

Virtual Db2 User Group Meeting

Can Db2 for z/OS be hacked?
Emil Kotrc, IBM Champion, Software Architect
Broadcom Software

Register here

July 16, 2024

Virtual Db2 User Group Meeting

Dale McInnis
IBM

Register Here