update readme

Commit c89eb6ea74 (parent 05e30e4420), authored by 41666 on 2022-12-09 13:59:20 -05:00.

PlanetSide 2 live population API. This API is free and open for anyone to use.
https://saerro.harasse.rs
Our methodology, tl;dr: watch for specific events on the Census websockets, transform them and add the player IDs we see to a time-series set, and query that set for the players seen within the last 15 minutes.
We're built on 3 core types, `players`, `classes`, and `vehicles`. Each can be filtered by Continent/Zone, Faction, and World.
---
The one and only goal of this app is to provide a current "point-in-time" population status for PlanetSide 2, per world, per faction (and later, per continent). Historical info is _not_ a goal; you may implement that on your end.
Please open an issue here or get in touch with Pomf (okano#0001) on the PS2 Discord if you have complex use cases for this data; it may be easy to implement APIs tailored to your needs.
The main use case is for the [Medkit](https://github.com/kayteh/medkit2) bot to have an in-house source of population data, without relying too heavily on any third-party stats service like Fisu, Honu, or Voidwell, which all have different population tracking needs and goals (and thus, different data).
An example of how it can be used is [pstop](https://pstop.harasse.rs) ([GitHub](https://github.com/genudine/pstop)).
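Since the API is GraphQL (see Architecture below), any HTTP client can query it. The sketch below, in Rust with `reqwest` and `serde_json`, shows the general shape of a request; the `/graphql` path and the query fields are illustrative assumptions rather than the documented schema, so check the live endpoint for the real one.

```rust
// Minimal sketch of querying the Saerro GraphQL API over HTTP.
// ASSUMPTIONS: the `/graphql` path and the query fields are placeholders
// for illustration, not the confirmed schema.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical query: total population for one world.
    let query = r#"{ world(id: 1) { population { total } } }"#;

    let body = reqwest::Client::new()
        .post("https://saerro.harasse.rs/graphql") // assumed endpoint path
        .json(&json!({ "query": query }))
        .send()
        .await?
        .text()
        .await?;

    println!("{body}");
    Ok(())
}
```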
## Architecture
- API
  - GraphQL API
  - Serves https://saerro.harasse.rs
  - Built on axum and async-graphql
  - Built on a "stacking filter" graph model, where each dimension adds a filter to lower dimensions.
- Event Streaming Service (ESS) Ingest
  - WebSocket listening to https://push.nanite-systems.net (which is a resilient mirror to https://push.planetside2.com)
  - Listens for `Death`, `VehicleDestroy`, and possibly other events
- Postgres with TimescaleDB
  - Holds `players`, `vehicles`, `classes`, and also `analytics` tables as hypertables (see the query sketch after this list).
  - Timescale makes this way too fast, mind-blowing :)
- Tasks
  - Occasional jobs that prune the database past what we actually want to retain.
  - Core data tables are kept to about 20 mins max of data, analytics to 1 week.
  - Can do database resets/migrations.
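As noted in the list above, here is a rough sketch of the two queries this layout implies: the point-in-time population count over the last 15 minutes, and the prune that Tasks performs. The `players(character_id, time)` columns, the connection string, and the use of `sqlx` are assumptions for illustration; the real schema and query code live in this repository.

```rust
// Sketch only: the table and column names (players, character_id, time) are
// assumptions based on the description above, not the actual schema.
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let pool = PgPoolOptions::new()
        .connect("postgres://saerro:password@localhost/saerro") // placeholder DSN
        .await?;

    // Point-in-time population: distinct players seen in the last 15 minutes.
    let population: i64 = sqlx::query_scalar(
        "SELECT COUNT(DISTINCT character_id) FROM players
         WHERE time > now() - INTERVAL '15 minutes'",
    )
    .fetch_one(&pool)
    .await?;
    println!("current population: {population}");

    // The prune task keeps core tables to roughly 20 minutes of data.
    sqlx::query("DELETE FROM players WHERE time < now() - INTERVAL '20 minutes'")
        .execute(&pool)
        .await?;

    Ok(())
}
```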
# Developing
This app is built with Rust. You can set up a build environment via https://rust
To run,
```sh
# Start backing services
docker compose up -d
# Run database migrations (required first step on a freshly up'd database)
cargo run --bin tasks migrate
# Start NSS ingest. Use push.planetside2.com if NSS isn't quite working...
env \
WS_ADDR="wss://push.planetside2.com/streaming?environment=ps2&service-id=s:$SERVICE_ID" \
PAIR=pc \
ROLE=backup \
WORLDS=1,10,13,17,19,40 \
cargo run --bin websocket
# (Optional:) Start PS4US websocket
env \
WS_ADDR="wss://push.planetside2.com/streaming?environment=ps2ps4us&service-id=s:$SERVICE_ID" \
PAIR=ps4us \
WORLDS=1000 \
cargo run --bin websocket
# (Optional:) Start PS4EU websocket
env \
WS_ADDR="wss://push.planetside2.com/streaming?environment=ps2ps4eu&service-id=s:$SERVICE_ID" \
PAIR=ps4eu \
WORLDS=2000 \
WS_ADDR="wss://push.nanite-systems.net/streaming?environment=all&service-id=s:$SERVICE_ID" \
WORLDS=all
cargo run --bin websocket
# Start API
cargo run --bin api
# Run prune tool
cargo run --bin tasks prune
# Build containers
docker build . --build-arg SERVICE=api -t saerro:api
docker build . --build-arg SERVICE=tasks -t saerro:tasks
docker build . --build-arg SERVICE=websocket -t saerro:websocket
```
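For context on what the `websocket` binary above does, here is a standalone sketch of an ESS/NSS subscription using `tokio-tungstenite` and `futures-util`. This is not the project's actual ingest code; the subscribe payload follows the publicly documented ESS subscribe format and only listens for the `Death` and `VehicleDestroy` events mentioned under Architecture.

```rust
// Standalone sketch of subscribing to ESS/NSS events -- NOT the project's
// `websocket` binary. Assumes the tokio-tungstenite and futures-util crates.
use futures_util::{SinkExt, StreamExt};
use tokio_tungstenite::{connect_async, tungstenite::Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let service_id = std::env::var("SERVICE_ID")?;
    let url = format!(
        "wss://push.nanite-systems.net/streaming?environment=all&service-id=s:{service_id}"
    );

    let (ws, _) = connect_async(url).await?;
    let (mut write, mut read) = ws.split();

    // Subscribe to the events the ingest cares about.
    write
        .send(Message::Text(
            r#"{"service":"event","action":"subscribe","characters":["all"],"worlds":["all"],"eventNames":["Death","VehicleDestroy"]}"#.into(),
        ))
        .await?;

    // A real ingest would transform each event and insert it into the
    // hypertables; here we just print the raw payload.
    while let Some(msg) = read.next().await {
        if let Message::Text(payload) = msg? {
            println!("{payload}");
        }
    }
    Ok(())
}
```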
Currently, the entire stack runs on Docker. You may deploy it to any server via:

```sh
docker compose up -d -f docker-compose.live.yaml
```
It listens on port 80; it's up to you from here. Make sure to change the passwords present in the file. It's not _that secret_ of data, but why risk it?