Brian Timoney contemplates why talented GIS professionals seek employment in other fields, tracing the problem to broadly scoped roles and lower pay compared to similar roles in other industries.
Even discounting the vagaries of job titles, the skew in the distribution of GIS Analyst salaries is notable because it implies a stagnant middle grinding away while effectively blocking the ability of new entrants to rapidly ascend the wage scale as you’d find in more “normal” distributions.
I have no numbers to back this up, but my gut feeling is that most GIS Analysts work in civil service. Salaries in civil service are tightly regulated; you don’t just negotiate higher pay unless your department has a role in a higher salary band and you land that job. That might explain the skew towards lower salaries and the limited upward mobility.
That said, Brian’s point is correct: GIS-specific roles are often too broadly scoped and underpaid.
How hard can it be to make two simple maps, one showing the location of addresses and one showing sales by US state? James Killick tried products from all the big names—ESRI, Google, Microsoft, and Felt. Turns out, getting started is not straightforward.
Killick went into the experiment pretending he had no prior experience, which I think is unfair. Complex software is a reflection of a complex problem space, great flexibility, or both. Not everything can or should be dumbed down to a level a disinterested teenager can be bothered to understand. Instagram is easier to use than a traditional camera, but the photos all look the same. Mapping software, like any design tool, requires domain knowledge: You need to know what you want to achieve. You need to know what kinds of maps exist, and which of them represent your data most effectively. If you know these things, you’re more likely to already know the right tools and where to find them.
And let’s not forget Felt is just over one year old, but they have already raised the bar for map-tech user experience and removed a lot of complexity from the process through clever design and impressive software engineering. Give them a little more time and they will further change the way we think about making maps. In a few years’ time we might ask ourselves why map-making was so difficult in 2023.
However, map tiles are the only way to create a seamless, smooth, multi-scale user experience for planet-scale geodata. Even as devices become more powerful, the detail and volume of geodata grow commensurately. The boundless supply of rich data, combined with the demand for smooth, instant-loading maps, means tiling will remain an essential part of the digital mapmaker’s toolkit.
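To make that concrete, here is a minimal sketch of the arithmetic behind the standard XYZ (“slippy map”) tiling scheme: each zoom level splits the world into a 2^z × 2^z grid, so a viewer only ever fetches the handful of tiles covering its viewport, no matter how large the underlying dataset is.

```ts
// Convert a WGS84 coordinate to XYZ ("slippy map") tile indices.
// At zoom z the world is split into 2^z × 2^z tiles, so a map client
// only requests the few tiles covering the current viewport.
function lonLatToTile(lon: number, lat: number, zoom: number): { x: number; y: number; z: number } {
  const n = 2 ** zoom;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y, z: zoom };
}

// Example: the tile covering central London at zoom 12.
console.log(lonLatToTile(-0.1276, 51.5072, 12)); // { x: 2046, y: 1362, z: 12 }
```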
Kyle Barron muses on different approaches for bringing high-performance geometry libraries written in C/C++ and Rust to the Web, using WebAssembly.
It’s my belief that for any project beyond a certain complexity, there should only be three core implementations:
One in C/C++ because C/C++ is today’s de facto performance-critical language, and it can bind to almost any other language.
One in Rust because removing memory errors brings so much potential and Rust’s ergonomics bring impressive development velocity to low-level code. I believe it’s tomorrow’s performance-critical language.
One in Java because the Java Virtual Machine makes it hard to interface with external C libraries (and it’s yesterday’s performance-critical language?).
The best code is the code that is never written, or so they say. Turf has served my modest needs well in the past, but something as fundamental as geometry operations doesn’t need to be rewritten if we have battle-tested libraries in other languages that we can bind with WebAssembly, yielding similar, often better, performance. The JavaScript world has a weird habit of reinventing the wheel, solving the same problems with slightly different approaches. We end up with lots and lots of software that basically does the same thing—having fewer, but more stable, options would be a good thing.
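To illustrate the idea, here is roughly what calling into a WASM-compiled geometry library from the browser could look like. The package name and API below are hypothetical, made up purely to show the shape of such a binding: JavaScript marshals a geometry across the WASM boundary, and the compiled C/C++/Rust code does the actual work.

```ts
// Illustrative only: "hypothetical-geometry-wasm" and its API are invented
// to show the shape of a WASM binding; it is not a real package.
import initGeometry from "hypothetical-geometry-wasm";

async function main() {
  // Loading the module instantiates the compiled .wasm binary once.
  const geom = await initGeometry();

  const point = { type: "Point", coordinates: [13.405, 52.52] };

  // The buffer operation runs in compiled C/C++/Rust code; JavaScript only
  // passes the GeoJSON geometry across the WASM boundary.
  const buffered = geom.buffer(point, { radius: 0.01, segments: 16 });

  console.log(buffered.type); // e.g. "Polygon"
}

main();
```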
Tom MacWright, after joining val.town, reflects on building Placemark. It’s an honest account of what it’s like to build and grow a business—something we don’t see very often.
Placemark will live, but in what form isn’t entirely clear:
I’ve envisioned it as a tool that you can use for simple things but can grow into a tool you use professionally or semi-professionally, but maybe that’s not the future: the future is Canva, not Illustrator.
I’ve been wondering how the announcement of Felt, which happened around the same time as Placemark’s, would affect Placemark’s future. Felt has venture capital, a team of smart people, and a lot of buzz, whilst Placemark is a bootstrapped one-man show.
How did we get to the point where there’s a need for a consortium aiming to standardise road map data? And what motivates big names like AWS, Microsoft, Meta, and TomTom to join forces?
James Killick has answers: Several providers are all building their own version of the same product, leading to a fractured landscape for map data. Some providers are now looking to lower the cost of producing their data, assert control over how open data is created, and improve interoperability between data sources.
James Killick on the problem of geodata standardisation:
The lack of common, broadly adopted geospatial data exchange standards is crippling the geospatial industry. It’s a bit like going to an EV charger with your shiny new electric vehicle and discovering you can’t charge it because your car has a different connector to the one used by the EV charger. The electricity is there and ready to be sucked up and used, but, sorry — your vehicle can’t consume it unless you miraculously come up with a magical adaptor that allows the energy to flow.
Standards exist for public-transport information but are missing for many other types of geodata. The commercial premise for these domains is different.
For public-transport organisations, their data is not the product. Trains and buses moving through a city are. Network and schedule data is a means to get more people to use public transport, so you want to get this information in front of as many people as possible—through displays at stations, a website, or third-party applications. And you want to integrate with other transport authorities’ data to provide a seamless service. All this is best accomplished through shared interfaces and data models.
On the other hand, road-network and address data isn’t a vehicle to sell a product; it usually is the product. You license it because you offer a service (delivery, navigation) that requires this information. The companies providing that data often survey and maintain it themselves. The idea that you could swap out or merge their data with someone else’s, using the routines and data models you have already built, is a threat to that business model. They don’t want interoperability; they want lock-in, so you keep paying them, not somebody else.
Jonathan Crowe, writing on The Map Room, has a better understanding of TomTom’s new Map platform than I had:
TomTom plans to do so by combining map data from its own data, third-party sources, sensor data, and OpenStreetMap. I’ve been around long enough to know that combining disparate map data sources is neither trivial nor easy. It’s also very labour intensive. TomTom says they’ll be using AI and machine learning to automate that process. It’ll be a real accomplishment if they can make it work. It may actually be a very big deal. I suspect it may also be the only way to make this platform remotely any good and financially viable at the same time.
This sounds very ambitious. Automated data fusion has been a popular research topic amongst PhD students for years. Maybe TomTom will be the first organisation to create a viable product this way; who knows?
Related to yesterday’s post: Lat × Long reader DoudouOSM pointed to an interview with OpenStreetMap founder Steve Coast on the Minds Behind Maps podcast. Steve talks about the future of maps, anticipating that they will disappear into the background of our apps and that we will interact with geographic information much less.
The whole interview is worth watching but beware, it’s three hours long! The relevant bits here start about 20 minutes in.
James Killick, over at Map Happenings, contemplates whether we’re witnessing the end of consumer maps:
It’s all part of a trend, a downward trend in my opinion, that will result in the demise of consumer maps. Contrary to Beck’s approach to distill reality into its essential essence, we’re moving in the opposite direction.
We are instead on a path to the dreaded metaverse, a virtual world where we should all be thankful and glad to wander around as legless avatars with the aspirational goal of reaching social media nirvana. I don’t know about you, but, ugh.
Sure, Zuck wants us all to stay home and spend all our money inside his multi-player game instead of going on holidays and exploring places.
But no matter what, we’ll continue to go places, and navigating unfamiliar territory will always involve maps. These maps will look different from what we’re using today: they will involve more real-time information and more data capturing sentiment, and our phone cameras will play a vital role.
Is it really that bad if future maps don’t resemble those made by Harry Beck or the Ordnance Survey in the olden days? I don’t think so; it’s called progress. I remember arriving in London almost ten years ago. Citymapper was a godsend. Even though you rarely ever looked at a map, it made this humongous city approachable to a boy from a small-ish town in East Germany.
Whether future solutions can be called maps as defined by the National Geographic Society doesn’t matter. Whether we old people like the look of digital way-finding tools doesn’t matter either. What matters is that they make cities easier to explore and navigate for the majority of people.
Greg Miller, writing for Wired Magazine, in a portrait of Cynthia Brewer, of Colorbrewer fame:
Brewer’s influence on cartography is far-ranging. Others have imitated her approach, developing a TypeBrewer and a Map Symbol Brewer. She’s seen her color schemes in everything from financial charts to brain imaging studies.
It’s a portrait, in one of the most renowned technology publications, of a university professor working on a rather niche subject — it goes to show how much influence Brewer’s work has on our craft.
Jacob Hall wrote a recap about how he mapped his campus at William & Mary:
The most rewarding part of this project was getting to engage with community members in and around campus that I otherwise would have never met.
One of the positive effects of going out and mapping an area, especially when done with such determination, is that you get to know your neighbourhood and its community in intricate detail. At the moment, there is probably no other person in the world who knows more about the William & Mary university campus than Jacob Hall.
Bill Dollins reflects on the value of industry standards after working with proprietary product APIs:
In the geospatial field, the work of OGC gives us a bit more shared understanding. Because of the Simple Features Specification, we have GeoJSON, GML, GeoPackage, and various similar implementations across multiple open-source and proprietary database systems and data warehouses. Each of those implementations has benefits and shortcomings, but their common root shortens the time to productivity with each. The same can be said of interfaces, such as WxS. I have often been critical of WxS, but, for all the inefficiencies across the various specs, they do provide a level of predictability across implementing technologies which frees a developer to focus on higher-level issues.
OGC’s W*S specifications (e.g., WMS, WFS, or WCS) share similar features. Each provides a GetCapabilities operation advertising the service’s — well — capabilities, and operations to access the service’s items (GetMap, GetFeature, or GetCoverage). The precise parameters required to execute the requests do vary, and so do server responses, but a good understanding of one specification can be transferred to the others.
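As a small illustration of that predictability, here is a sketch of requesting a WMS capabilities document. The endpoint URL is a placeholder; SERVICE, VERSION, and REQUEST are the query parameters the WMS specification defines, and the same pattern carries over to WFS and WCS with little more than the SERVICE value and the item-access operation changing.

```ts
// Fetch a WMS capabilities document. The base URL is a placeholder;
// SERVICE, VERSION, and REQUEST are the standard WMS query parameters.
const baseUrl = "https://example.com/ows"; // placeholder endpoint

async function getCapabilities(): Promise<string> {
  const params = new URLSearchParams({
    SERVICE: "WMS",           // for WFS/WCS, essentially only this changes
    VERSION: "1.3.0",
    REQUEST: "GetCapabilities",
  });
  const response = await fetch(`${baseUrl}?${params}`);
  if (!response.ok) throw new Error(`GetCapabilities failed: ${response.status}`);
  return response.text();     // an XML document listing layers, CRSs, formats, …
}

getCapabilities().then((xml) => console.log(xml.slice(0, 200)));
```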
The same flexibility and predictability is built into newer standards, like OGC API - Features, and community specifications like STAC — both share the same foundation. OGC’s processes may be slow, and the specifications may not make for an entertaining read, but its diligent process leads to predictable API design, enabling service and client developers to implement applications consistently.
You appreciate that even more once you’ve had the pleasure of building a service against the Salesforce API.
Saman Bemel Benrud, previously a designer at Mapbox, reflects on his time at the company. It’s a tale of what happens when a company accepts big VC money. The priorities shift from solving relevant problems to making money.
Even if you’re a lowly designer or engineer, you must understand what your company needs to do to be sustainable. It very likely is different from what they’re doing now, and may come with unexpected ethical compromises.
What other choices do companies have when they build geo-data products and compete with Google? Maybe they can grow more slowly, not sell solutions that aren’t available yet, involve employees in decisions, or accept and support unionisation efforts. The company still has to make money, but it might feel different to the people building the product. There must be a way to build a sustainable business that doesn’t involve VC funding.
Tom MacWright explores whether newer geo-data formats, like FlatGeobuf, Zarr, GeoParquet, Arrow, or COGs, are useful for applications making frequent updates to the data.
The post dives deep into some of the characteristics of these data formats, including compression, random access, and random writes, and concludes that they are optimised for reading data and that the benefits for writes are limited:
I like these new formats and I’ll support them, but do they benefit a use case like Placemark? If you’re on the web, and have data that you expect to update pretty often, are there wins to be had with new geospatial formats? I’m not sure.
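Part of what makes these formats read-friendly is that they are laid out for random access over HTTP: a client fetches a header or index first, then requests only the byte ranges it actually needs. A rough sketch of that pattern follows; the URL and byte offsets are placeholders, not any particular library’s API.

```ts
// Cloud-optimised formats (COG, FlatGeobuf, GeoParquet, …) are laid out so a
// client can read small slices of a large remote file via HTTP range requests.
// The URL and byte offsets below are placeholders for illustration.
const url = "https://example.com/data/countries.fgb";

async function readRange(start: number, end: number): Promise<ArrayBuffer> {
  const response = await fetch(url, {
    headers: { Range: `bytes=${start}-${end}` }, // inclusive byte range
  });
  if (response.status !== 206) {
    throw new Error(`expected a partial response, got ${response.status}`);
  }
  return response.arrayBuffer();
}

// Read the header/index first, then only the slices holding the features in view.
readRange(0, 1023).then((header) =>
  console.log(`fetched ${header.byteLength} header bytes`)
);
```

The flip side is that a layout optimised for cheap partial reads tends to make in-place updates awkward, which is Tom’s point.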