Tim Schaub built and ran a crawler on public STAC catalogs to better understand the current adoption of the standard. A couple of figures stand out:
About a third of the catalogs are static, but API-based catalogs provide access to disproportionately more items and assets.
The vast majority of catalogs already implement STAC 1.0.0, which was only finalised in May.
Almost half of all items across all catalogs are not satellite images or other spatial data; they are additional metadata represented in various formats such as JSON, XML or HTML.
Jochen Topf has published a preliminary report on the shortcomings of OpenStreetMap’s data model. It’s a very in-the-weeds document, providing an in-depth discussion of the current state of OSM’s data model and ways to improve and future-proof it.
The report vets the data model against the backdrop of the anticipated growth of the OSM data set and its implications for data processing. It looks at the lack of a native polygon geometry type (a classic!), limitations around mapping large objects and those with fuzzy boundaries, the model’s incompatibility with standard GIS software, and many other issues. Topf also suggests solutions to some of the problems, including removing untagged nodes, introducing an area type, limiting the length of tag values, changing the database management system, and offering different data formats via the API and for exports.
How many of the proposed changes will be implemented remains to be seen; Topf himself is cautious:
I am not proposing any action or, at most, minor steps. This is not because those are not important issues, but because I cannot see a clear path to any improvements. Often the goals are too vague to be actionable.
Implementing just some of the proposed changes would be a big lift; every tool interacting with the OSM data and API would be affected: every editor, every command-line script that converts data, every export tool. It would require constant engagement with the community and strong technical leadership.
Jacob Hall wrote a recap about how he mapped his campus at William & Mary:
The most rewarding part of this project was getting to engage with community members in and around campus that I otherwise would have never met.
One of the positive effects of going out and mapping an area, especially when done with such determination, is that you get to know your neighbourhood and its community in intricate detail. At the moment, there is probably no other person in the world who knows more about the William & Mary university campus than Jacob Hall.
The Guardian is hosting an interactive workshop exploring the evolution of cartography.
[Y]ou will discover how maps, and our relationship to them, have evolved over time. You will learn how the way that a map is designed can influence the way in which it is interpreted, and why this means that even the most authoritative map may not be as objective as we think.
You will also draw on your new understanding of cartography to create your own geographic data, and will touch on how to successfully display geographic data to tell a story, and how geo data visualisation has evolved and influenced modern-day map techniques.
The event is taking place on 26 October, from 5pm to 8pm (BST), and is led by Jess Baker and Paul Naylor, both of whom work at Ordnance Survey. It’s an online event, so you can dial in from anywhere in the world.
[MapLibre] is Amazon Location Service’s recommended map renderer and forms the basis of AWS Amplify Geo’s map display and geocoding capabilities. We’ve been excited to see the project grow since it launched in 2020 and look forward to continuing our work with the MapLibre Organization.
Mapbox GL’s license change happened in December 2020. Amazon’s location services debuted only a few days later. MapLibre was forked from Mapbox GL after the license change. Is anyone still wondering why Mapbox changed the license for Mapbox GL?
I only occasionally contribute to OpenStreetMap, mainly from the comfort of my desk and rarely on the go. I almost exclusively add and edit Points of Interest when I’m out. I used Go Map!! before, but it didn’t stick. In dense areas like central London, too many features are displayed in the editor. You see points for traffic lights, intersections, crossings, bins, and shops – all at once. Understanding what features exist or need to be added often requires clicking individual points to identify what they are.
Every Door is a new mobile OpenStreetMap editor, built by Ilja Zverev and available for iOS and Android. And it takes a different approach to editing OSM on a mobile phone.
Every Door focuses on fewer things at a time. You edit amenities, street furniture or building entrances and house numbers — but never all at the same time. You pick one group, see what’s already mapped around you, and can only edit and add features of that group. And instead of showing you all of the existing features in the current map view, it downloads just the few closest to your current location. Every Door nicely caters to the way many mappers edit OpenStreetMap: they focus on one goal at a time, say mapping all the rubbish bins in a park, work on that until they’re done, and map the nearest objects first.
A few well-designed features make editing points of interest a breeze. Entering opening hours is a pain in iD, but it’s straightforward in Every Door thanks to a neat interface for selecting days and times, which doesn’t require composing a long string and hoping it matches the pattern OSM expects. Every Door also caches selected tags for feature types, so I can quickly whizz through a park and map all the benches that look the same and share the same attributes. All it takes is a brief stop next to one to get a decent GPS signal.
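For reference, the string Every Door spares you from typing by hand is the value of OSM’s opening_hours tag, which follows a compact but strict syntax:

```
opening_hours=Mo-Fr 09:00-18:00; Sa 10:00-16:00; Su,PH off
```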
The interface could be more polished, and some interactions aren’t intuitive. But Every Door is a cross-platform app built by one person, presumably in their spare time. I don’t expect it to look like a boutique iOS app that costs $75 a year. Every Door is a nice app that takes away much of the complexity of editing OpenStreetMap on the go.
Just in time for FOSS4G next week and the annual OpenLayers Feature Frenzy, the OpenLayers team has released 7.0.0, a new major version. Over 90 pull requests went into this release, but two changes stand out: Internet Explorer is no longer supported. And WebGL rendering has been extended to support lines and polygons alongside points.
A renderer converts geo-data into data structures that browsers can render. In the past, OpenLayers has primarily relied on the Canvas API to render vector data. Compared to Canvas, WebGL is considered more performant, especially when visualising complex geometries or large datasets.
Adding more WebGL-rendering capabilities to OpenLayers has been an ongoing effort in the last few years, and there’s more to come:
The rendering API is still low level and experimental. Future releases will include a higher level styling API.
Technically this is a breaking change, but upgrading should be straightforward, according to the release notes:
[W]e changed the signature for a number of methods on a helper class that had been marked as part of the API in 6.x releases. While this is technically a breaking change, it is unlikely that applications were using this helper class, so upgrades should be straightforward.
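For the curious, the low-level API is used by subclassing Layer and returning a WebGL renderer from createRenderer(). Treat the snippet below as a sketch only: the renderer’s import path and option format are my reading of the OpenLayers examples and may differ in your exact version of the experimental API.

```typescript
// Sketch of OpenLayers' experimental WebGL vector rendering (lines/polygons).
// The renderer import path and the style option format are assumptions based
// on the OpenLayers examples and may change between releases.
import Map from "ol/Map";
import View from "ol/View";
import GeoJSON from "ol/format/GeoJSON";
import Layer from "ol/layer/Layer";
import VectorSource from "ol/source/Vector";
import WebGLVectorLayerRenderer from "ol/renderer/webgl/VectorLayer";

// A custom layer class that renders its vector source with WebGL instead of
// the default Canvas renderer.
class WebGLVectorLayer extends Layer {
  createRenderer() {
    return new WebGLVectorLayerRenderer(this, {
      style: {
        "fill-color": "rgba(100, 150, 250, 0.5)",
        "stroke-color": "rgba(30, 30, 30, 1)",
        "stroke-width": 1.5,
      },
    });
  }
}

new Map({
  target: "map",
  layers: [
    new WebGLVectorLayer({
      source: new VectorSource({
        url: "countries.geojson", // placeholder dataset
        format: new GeoJSON(),
      }),
    }),
  ],
  view: new View({ center: [0, 0], zoom: 2 }),
});
```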
Chartographer [is] a visualization tool that breaks down different stylesheet properties by layer and zoom level for easy analysis and debugging. Now instead of panning and zooming around the map to find and identify issues, or scrolling through thousands of lines of JSON looking for mismatched zoom numbers, you can visualize how layers are styled at all zoom levels in a single view.
Chartographer looks like a handy tool if you’re hand-crafting map stylesheets.
I somehow missed this: Bertin.js has reached a significant milestone with its 1.0 release.
Bertin.js is a JavaScript library that simplifies creating static thematic maps in SVG, primarily for print, online publications or presentations. You can certainly add a level of interactivity, but Bertin.js isn’t designed to replace, or compete with, interactive-mapping libraries like Leaflet, OpenLayers, or MapLibre GL.
Bertin.js provides a comprehensive API to make thematic maps using classification and cartograms, plus tooling to add map components such as titles, footers, legends, or graticules. It is a wrapper around D3’s powerful mapping capabilities, providing reasonable defaults so you can focus on designing your map and don’t have to sweat intricate details such as creating a legend and specifying individual pixel coordinates to perfectly align symbols and labels.
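To give a flavour of the API, a thematic map boils down to a single draw() call with a list of layer objects. The snippet below is a rough sketch from memory of the docs: layer types and option names are approximate, and the data URLs are placeholders.

```typescript
// Rough sketch of a Bertin.js thematic map. Layer types and option names are
// approximations of the documented API and may not match it exactly.
import * as bertin from "bertin";

// Placeholder data: two GeoJSON FeatureCollections; the city features carry a
// numeric "pop" property.
const world = await fetch("world.geojson").then((r) => r.json());
const cities = await fetch("cities.geojson").then((r) => r.json());

const svg = bertin.draw({
  params: { projection: "Eckert3" },
  layers: [
    { type: "header", text: "World population" },
    { type: "bubble", geojson: cities, values: "pop", fill: "#d1495b" },
    { type: "layer", geojson: world, fill: "#e6e6e6", stroke: "white" },
    { type: "graticule" },
    { type: "outline" },
    { type: "footer", text: "Source: a hypothetical dataset" },
  ],
});

// draw() returns an SVG node you can attach to the page or export for print.
document.body.appendChild(svg);
```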
Following last week’s post explaining the tile-addressing schema in PMTiles version 3, Brandon Liu now discusses its compression approach, which reduces the disk space required to store a global tile dataset to only 91.6 MB.
Update: Brandon clarified on Twitter that the 91.6 MB mentioned above only holds the data required to map Z,X,Y coordinates to the corresponding Tile ID. The actual global map data set is 80 GB. (15 August 2022)
Kyle Barron demonstrates how to use deck.gl to render data in GeoArrow format, starting with a GeoJSON dataset of one million building footprints in Utah.
We’ve been able to make web maps with GeoJSON data for some time now, and converting GeoJSON to GeoArrow and preparing the data for deck.gl requires extra development work, so why would you want to use GeoArrow? The short answer: It’s incredibly fast.
GeoArrow overlaps almost exactly with the format that deck.gl expects! So deck.gl can render from GeoArrow’s physical representation very efficiently. For point and linestring geometry types, the underlying coordinates array can essentially be copied directly to the GPU with no CPU processing required. For polygon geometries, only polygon tessellation still needs to happen on the CPU.
We’re looking at the not-so-distant future of web mapping here, when we can render millions of features onto a web map without a noticeable impact on performance.
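To illustrate the point case from the quote above, here is a sketch of the general idea, not Kyle’s actual code: the file name, the column name and the exact Arrow accessors are assumptions, and it assumes a single record batch with GeoArrow’s interleaved point encoding.

```typescript
// Sketch: feeding a GeoArrow point column to deck.gl without converting to
// GeoJSON. Assumes a single record batch and a geometry column stored as a
// FixedSizeList of interleaved x/y coordinates (the GeoArrow point encoding).
import { tableFromIPC } from "apache-arrow";
import { Deck } from "@deck.gl/core";
import { ScatterplotLayer } from "@deck.gl/layers";

const response = await fetch("building-centroids.arrow"); // placeholder file
const table = tableFromIPC(await response.arrayBuffer());

// The flat coordinate buffer lives in the child array of the FixedSizeList
// column: [x0, y0, x1, y1, ...]. No per-feature parsing required.
const geometry = table.getChild("geometry");
if (!geometry) throw new Error("no geometry column");
const coords = geometry.data[0].children[0].values as Float64Array;

new Deck({
  initialViewState: { longitude: -111.9, latitude: 40.76, zoom: 10 },
  controller: true,
  layers: [
    new ScatterplotLayer({
      id: "points",
      // Binary attributes: deck.gl uploads the coordinate buffer more or less
      // as-is instead of calling an accessor for every feature.
      data: {
        length: table.numRows,
        attributes: {
          getPosition: { value: coords, size: 2 },
        },
      },
      getRadius: 20,
      getFillColor: [200, 60, 60],
    }),
  ],
});
```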
A new version of MapLibre was released just yesterday. Release 2.2.0 is a minor version in semantic-versioning speak, but it adds a major feature: 3D terrain maps, enabling developers to visualise the topography of land surfaces in interactive maps on the Web.
Mapbox introduced a similar feature in Mapbox GL 2.0.0, the first of the library’s releases after moving away from an open-source license. MapLibre is a fork of Mapbox GL that was created to preserve and continue the work under an open-source license. Thanks to the efforts of MapLibre maintainers, developers can now build interactive 3D maps for the Web using open-source technology.
I’ve played around with a small demo based on an example by Oliver Wipfli, and the results look really slick. Some minor issues remain; for example, the hillshade disappears when you tilt the map close to a horizontal view.
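For anyone who wants to try it, the setup is pleasantly small. The snippet below is a minimal sketch; the style URL and DEM tiles are just examples, and any raster-dem source with a supported encoding should do.

```typescript
// Minimal sketch of 3D terrain in MapLibre GL JS 2.2: add a raster-dem source
// and tell the map to use it as terrain. URLs are examples only.
import maplibregl from "maplibre-gl";

const map = new maplibregl.Map({
  container: "map",
  style: "https://demotiles.maplibre.org/style.json", // example style
  center: [11.39, 47.27], // somewhere in the Alps
  zoom: 12,
  pitch: 60,
});

map.on("load", () => {
  // Terrarium-encoded elevation tiles (here: the public AWS terrain tiles).
  map.addSource("terrain", {
    type: "raster-dem",
    tiles: [
      "https://s3.amazonaws.com/elevation-tiles-prod/terrarium/{z}/{x}/{y}.png",
    ],
    encoding: "terrarium",
    tileSize: 256,
    maxzoom: 14,
  });

  // Drape the map over the elevation model.
  map.setTerrain({ source: "terrain", exaggeration: 1.5 });
});
```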
It’s great to see the project under such active development. Mapbox has paved the way for modern web mapping with vector tiles, but it’s good to have more competition in the space — even better when the competition is open-source.
PMTiles version 3 introduces a new tile-addressing schema. Instead of using Z,X,Y tile coordinates, the new schema uses tile IDs based on the tile’s position within a series of Hilbert curves:
The TileId 36052 corresponds to the Z,X,Y position of 8,40,87. The calculation of ID uses a pyramid of Hilbert curves starting at TileId=0 for zoom level 0. The next zoom level, a 2x2 square, occupies the next four IDs in the ID space TileId=(1,2,3,4), the next level being the next 16 IDs, and so on.
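In code, the mapping works out to a zoom-level offset plus a position on a Hilbert curve. The sketch below is not the PMTiles reference implementation, but it reproduces the example above: zxyToTileId(8, 40, 87) returns 36052.

```typescript
// Sketch of the Z,X,Y -> TileId mapping described above (not the PMTiles
// reference code). The Hilbert conversion is the common textbook algorithm.
function zxyToTileId(z: number, x: number, y: number): number {
  // All tiles of zoom levels 0..z-1 come first: 1 + 4 + 16 + ... = (4^z - 1) / 3
  const levelOffset = (Math.pow(4, z) - 1) / 3;

  // Position of (x, y) on a Hilbert curve covering the 2^z by 2^z grid.
  const n = Math.pow(2, z);
  let d = 0;
  for (let s = n / 2; s >= 1; s = s / 2) {
    const rx = (x & s) > 0 ? 1 : 0;
    const ry = (y & s) > 0 ? 1 : 0;
    d += s * s * ((3 * rx) ^ ry);
    // Rotate/flip the quadrant so the curve stays continuous.
    if (ry === 0) {
      if (rx === 1) {
        x = n - 1 - x;
        y = n - 1 - y;
      }
      const t = x;
      x = y;
      y = t;
    }
  }
  return levelOffset + d;
}

console.log(zxyToTileId(0, 0, 0)); // 0
console.log(zxyToTileId(8, 40, 87)); // 36052
```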
And to avoid duplicating tiles that contain virtually no information (for example, tiles just showing water), the RunLength indicates how many times a tile is repeated within the Hilbert curve, so vast areas of the ocean can be represented with just one tile.
Ocean tiles are not only repetitive, but sparse and often contiguous in Hilbert space. This entry:
means that the 44 byte vector tile with a single square in the layer ocean is repeated over 100,000 times, starting at Z,X,Y=11,285,1311 and ending at 11,19,1304.
Neat.
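Run-length entries also keep lookups simple: a reader only has to check whether a tile ID falls inside an entry’s run. A sketch (field names are illustrative, not the exact names from the PMTiles v3 spec):

```typescript
// Sketch of resolving a TileId against run-length-encoded directory entries.
// Field names are illustrative, not the exact PMTiles v3 spec names.
interface Entry {
  tileId: number;    // first TileId covered by this entry
  runLength: number; // how many consecutive TileIds share the same tile data
  offset: number;    // where the tile bytes start in the archive
  length: number;    // how many bytes the tile occupies
}

// Entries are sorted by tileId and runs don't overlap, so a binary search
// finds the candidate entry; the run length decides whether the id is covered.
function findTile(entries: Entry[], tileId: number): Entry | null {
  let lo = 0;
  let hi = entries.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const entry = entries[mid];
    if (tileId < entry.tileId) {
      hi = mid - 1;
    } else if (tileId >= entry.tileId + entry.runLength) {
      lo = mid + 1;
    } else {
      return entry; // tileId falls inside this entry's run
    }
  }
  return null; // no entry covers this tile
}
```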
Update: A new post outlines the disk layout and compression approach of PMTiles version 3. (15 August 2022)
Outreachy is an internship program designed for young people to make their first mark in open-source software development, specifically people from underrepresented groups in the tech industry. (So, if you’re a European white dude, you don’t need to apply.) The internships come with a $7,000 stipend for three months and are fully remote.
The mentoring projects for the next round will only be announced at the end of September, but the list usually includes projects with a data-collection and data-management focus, some with a geospatial element. The most recently completed round included interns at Ushahidi and ODK-X.
The program is an excellent opportunity to get into open source and add a fantastic project to your portfolio. I’ve mentored Outreachy interns before, and some went on to build successful careers at big names such as Red Hat and Google. (Obviously, because my former mentees are smart and driven software engineers and not because I’m a particularly great mentor.)