Tomislav Jerković

About Me

I'm Tomi.

I'm a curious person. I like to understand what's happening and how it's happening.

Why are things the way they are? Could they have been different? Could they have been better? Can we make them better?

I'm not afraid to learn - even if I learn that I've been wrong, or if it takes time.

I find joy in learning.

I also find joy in my surroundings. The workplace, for example, is a social setting, and I like it when we get along and get things done. And I put in the work to get us there.

Interests / Skills / Competencies

I'm a self-organizer, a learner, a (self-)motivator. I coach teams. Sometimes I write code - it does the trick.

Besides that, I'm many more things - but let's keep this page compact.

Experience

2018 - today

BMW

Apart from the usual scrum master / agile coach stuff, I like to embark on side quests - projects and undertakings that teach me something about the organization I'm in, about technical topics and so on. Here's an excerpt, with the most recent stuff at the top ...

The propagation of (fix)versions across multiple JIRA projects

We have to work with multiple JIRA projects (10+), which all deliver onto the same platform. To indicate which issue will be shipped in which delivery, we use the "fix version" attribute.

So far so good.

Unfortunately you have to create every (fix)version in every project separately - which is tedious and error-prone if your versions contain lots of information encoded in their names. All versions describing the same delivery have to have exactly the same name - otherwise you'll have a hard time filtering for those versions' issues.

So what do you do? You write a little script that scratches that itch ...

It's nothing particularly extraordinary. So why was it still an interesting endeavour?

Well, there's no data storage in which to save, e.g., a mapping table of "source versions" and the versions derived from them in the other projects. So you append an ID to each version's description, which can be used to match it to its source. Then you have to parse the version's name to extract the various pieces of information encoded in it - to compare it to the source and decide whether an update is necessary.
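A minimal sketch of how that could look, assuming the jira Python package - the name format, the "source-id" marker and the helper names are made up for illustration:

    import re
    from jira import JIRA  # assumption: the jira Python package is used

    # Hypothetical name format: "<platform>_<release>_<date>", e.g. "NAV_21.3_2021-06-01"
    NAME_PATTERN = re.compile(r"(?P<platform>\w+)_(?P<release>[\d.]+)_(?P<date>\d{4}-\d{2}-\d{2})")
    SOURCE_ID_PATTERN = re.compile(r"source-id:(?P<id>\w+)")  # marker in the description

    def parse_version_name(name):
        """Extract the pieces of information encoded in a version's name."""
        match = NAME_PATTERN.fullmatch(name)
        return match.groupdict() if match else None

    def find_derived_version(jira, project_key, source_id):
        """Find the version in a project that was derived from the given source."""
        for version in jira.project_versions(project_key):
            description = getattr(version, "description", "") or ""
            found = SOURCE_ID_PATTERN.search(description)
            if found and found.group("id") == source_id:
                return version
        return None

    def propagate(jira, source_version, source_id, project_keys):
        """Create or rename the source version in every target project."""
        for key in project_keys:
            derived = find_derived_version(jira, key, source_id)
            if derived is None:
                jira.create_version(name=source_version.name, project=key,
                                    description=f"source-id:{source_id}")
            elif derived.name != source_version.name:
                derived.update(name=source_version.name)  # keep the names in sync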

After that you work on making it robust and set up tests - "little helpers" like this tend to do a lot of damage really quickly if they go off the rails.
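The parsing, for example, is easy to pin down with a few tests - a small pytest sketch against the hypothetical parse_version_name helper above:

    from version_sync import parse_version_name  # hypothetical module holding the helper

    def test_parse_valid_name():
        parts = parse_version_name("NAV_21.3_2021-06-01")
        assert parts == {"platform": "NAV", "release": "21.3", "date": "2021-06-01"}

    def test_parse_rejects_malformed_name():
        # A malformed name must never be propagated silently.
        assert parse_version_name("not a version") is None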

And then you test and rework the error cases - to prevent, e.g., data loss if something goes wrong.

Bonus: as a kind of self-service, I set up a little web page so the colleagues in charge of maintaining the release / version plan can trigger the propagation themselves. It is then executed with a delay of up to 30 minutes, so you don't run the risk of triggering multiple runs in parallel - besides creating unnecessary load on the servers, you'd also risk data locks and script runs aborting because they cannot lock a resource for writes.
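One way to get that "at most one pending run, with a bounded delay" behaviour is a coalescing trigger - a minimal sketch, assuming the propagation finishes well within the delay window:

    import threading

    DELAY_SECONDS = 30 * 60  # execute at most 30 minutes after the first request

    class CoalescingTrigger:
        """Collapse any number of trigger requests into a single delayed run."""

        def __init__(self, job):
            self._job = job  # e.g. the propagation function
            self._lock = threading.Lock()
            self._timer = None

        def request_run(self):
            with self._lock:
                if self._timer is None:  # nothing pending yet -> schedule a run
                    self._timer = threading.Timer(DELAY_SECONDS, self._run)
                    self._timer.start()
                # else: a run is already pending, this request is absorbed by it

        def _run(self):
            with self._lock:
                self._timer = None  # from now on, new requests schedule a new run
            self._job()

Since requests arriving during the delay are absorbed into the pending run, at most one run can be scheduled at any time.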

A nice little project! (And it saves us money... nobody should have to do this manually.)

Migrating the complete product backlog to another tool - all while working towards the product launch

Sometimes you'll find yourself in a situation where the tool you use for your backlog just doesn't cut it (any more). So you pick something you think is better suited for the job.

It's probably easiest to start from scratch in the new tool. However, this is not always possible. Then you have to migrate content. In this case for an organization of approx. 1K people. And since you cannot close the whole thing down for three days, you do it all while they're working on, in and with it.

So you set up your tables to keep track of all migrated issues - their old and new IDs (so you can use that as a lookup for recreating the structure, i.e. link stories to epics and epics to sagas). You create a mapping for all the attributes - which field in the old tool corresponds to which field in the new one, etc.
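In essence, two lookup structures - sketched here with made-up field names:

    # Old issue ID -> new issue ID, or False if creation failed; persisted between runs.
    # Used to recreate links: when an epic references saga "OLD-101", look up its new ID.
    id_map = {
        "OLD-101": "NEW-4711",  # migrated
        "OLD-102": False,       # creation failed -> its children get skipped too
    }

    # Field in the old tool -> field in the new tool (illustrative names only).
    field_map = {
        "Summary": "title",
        "Story Points": "estimate",
        "Fix Version": "release",
    }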

And then you migrate.

We decided to migrate team by team, or a couple of teams at once - that way, if something went wrong, there wouldn't be too many people affected, and we could closely support them to fix any issues.

We also decided not to create all issues in one (the first) pass and then link them in a second one. Instead we went through the structure level by level, starting from the top: we created all the sagas for a team, storing their new IDs - or False, if they couldn't be created - next to their old IDs. That way, when we did the epics, we could check whether the corresponding sagas had been created. If not, we would skip those epics as well (storing False) and proceed. The same applied to the story level.

Doing this made sure we did not create a backlog structure with holes in it: stories dangling around without their epics, etc. And if we wanted to make a second attempt, we could re-use the input data: the script would only try to create issues that had not been created yet, omitting the ones that already existed - but using their IDs whenever a reference to them was encountered.
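A condensed sketch of that pass logic - Issue, create_issue and the error handling are hypothetical stand-ins:

    from dataclasses import dataclass, field

    @dataclass
    class Issue:
        old_id: str
        parent_old_id: str | None  # None for top-level sagas
        fields: dict = field(default_factory=dict)

    def migrate_level(issues, id_map, create_issue):
        """Create one level of the backlog (e.g. all epics of one team).

        id_map maps old IDs to new IDs, or to False if creation failed.
        Safe to re-run: migrated issues are skipped, and children of
        failed parents are marked False instead of being left dangling.
        """
        for issue in issues:
            if id_map.get(issue.old_id):
                continue  # already migrated in an earlier attempt
            parent_new_id = id_map.get(issue.parent_old_id) if issue.parent_old_id else None
            if issue.parent_old_id and not parent_new_id:
                id_map[issue.old_id] = False  # parent missing -> skip the child
                continue
            try:
                id_map[issue.old_id] = create_issue(issue, parent=parent_new_id)
            except Exception:
                id_map[issue.old_id] = False  # record the failure, fix it, re-run

    # Top-down, one level per pass:
    # migrate_level(sagas, id_map, create_issue)
    # migrate_level(epics, id_map, create_issue)
    # migrate_level(stories, id_map, create_issue)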

This allowed us to quickly try again once whatever had prevented an issue's creation was fixed. It was the reason we could migrate every batch with basically zero downtime for the teams. It also meant we had next to no clean-up effort - you don't have to clean up things you don't mess up.

This, again, is not rocket science, but it made a difference for the teams - they had no trouble switching tools rather late in the project. It was made even easier for them by the fact that we could help them set up the new tool for their daily use. This close contact with, and support for, the teams is what made the project. And I must say, I enjoyed it.

2009 - 2018
GMX / 1&1
2003 - 2009
University of Würzburg

... writing is hard ... will add details later