# Repo Operations

This is the single source for build/test commands.

## Repo-wide

```bash
bazel build //...
bazel test //...
bazel build //apps/specifications/dafny:verify
```

`scripts/dev.sh` now creates `/tmp/bazel-sandbox` automatically for local Bazel sandbox compatibility.
## Apps

```bash
bazel build //apps/...
bazel test //apps/...
```
## Microservices

```bash
bazel build //apps/microservices/...
bazel test //apps/microservices/...
```

Per-service binaries, tests, and images (6 services):

```bash
# merchant-api
bazel build //apps/microservices/merchant-api:core
bazel test //apps/microservices/merchant-api:core_test
bazel test //apps/microservices/merchant-api:merchant_controller_endpoint_unit_coverage_test
bazel build //apps/microservices/merchant-api:image
bazel build //apps/microservices/merchant-api:image_tarball

# management-api
bazel build //apps/microservices/management-api:core
bazel test //apps/microservices/management-api:management_controller_endpoint_unit_coverage_test
bazel test //apps/microservices/management-api:health_controller_test
bazel test //apps/microservices/management-api:notification_service_test
bazel test //apps/microservices/management-api:terminal_service_smoke_test
bazel build //apps/microservices/management-api:image
bazel build //apps/microservices/management-api:image_tarball

# status
bazel build //apps/microservices/status:core
bazel test //apps/microservices/status:untested_surface_coverage_test
bazel build //apps/microservices/status:image
bazel build //apps/microservices/status:image_tarball

# terminal-api
bazel build //apps/microservices/terminal-api:core
bazel test //apps/microservices/terminal-api:core_test
bazel build //apps/microservices/terminal-api:image
bazel build //apps/microservices/terminal-api:image_tarball

# terminal-onboarding
bazel build //apps/microservices/terminal-onboarding:core
bazel test //apps/microservices/terminal-onboarding:core_test
bazel build //apps/microservices/terminal-onboarding:image
bazel build //apps/microservices/terminal-onboarding:image_tarball

# tx-bundler
bazel build //apps/microservices/tx-bundler:core
bazel test //apps/microservices/tx-bundler:core_test
bazel build //apps/microservices/tx-bundler:image
bazel build //apps/microservices/tx-bundler:image_tarball
```
### Status service operational notes
The status service (`apps/microservices/status`) is an internal health-check
aggregator deployed to Cloud Run as `pinpointpos-status`. It exposes two
identical endpoints (`GET /` and `GET /health`) that poll the Cloud Run Admin
API for the five other services and return a combined JSON status.
**Monitored services:** merchant-api, management-api, terminal-api,
terminal-onboarding, tx-bundler.
**Environment variables:**
| Variable | Required | Default | Description |
| ---------------------- | -------- | ---------- | -------------------------------- |
| `GOOGLE_CLOUD_PROJECT` | yes | — | GCP project ID for Cloud Run API |
| `GOOGLE_CLOUD_REGION` | no | `us-east1` | Region to query |
When `GOOGLE_CLOUD_PROJECT` is unset the service returns `"status": "healthy"`
with `"checks": "skipped"`. Authentication uses Application Default Credentials
(ADC) with `cloud-platform` scope.
**Response shape:**

```json
{
  "success": true,
  "data": {
    "status": "healthy | degraded",
    "services": {
      "merchant-api": "healthy | unhealthy",
      "management-api": "healthy | unhealthy",
      "terminal-api": "healthy | unhealthy",
      "terminal-onboarding": "healthy | unhealthy",
      "tx-bundler": "healthy | unhealthy"
    }
  }
}
```
The `pinpointpos-` prefix is stripped from service names in the response.
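The aggregation implied above (any unhealthy service degrades the overall status) and the prefix stripping can be sketched in shell. `overall_status` and `strip_prefix` are hypothetical helpers for illustration, not functions from the service's code:

```shell
# Hypothetical helpers sketching the status service's described behavior.
overall_status() {
  # "degraded" if any input status is not "healthy", else "healthy".
  for s in "$@"; do
    [ "$s" = "healthy" ] || { echo "degraded"; return; }
  done
  echo "healthy"
}

strip_prefix() {
  # Drop the Cloud Run "pinpointpos-" prefix from a service name.
  echo "${1#pinpointpos-}"
}

overall_status healthy healthy healthy healthy healthy   # -> healthy
overall_status healthy unhealthy healthy healthy healthy # -> degraded
strip_prefix pinpointpos-tx-bundler                      # -> tx-bundler
```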
---
## Websites
Build/test commands are covered here, while dev/preview/serve details live in
`apps/websites/agents.md` (customer, support, main-website).
```bash
bazel build //apps/websites/...
bazel test //apps/websites/...
bazel test //apps/websites/customer:lint
bazel test //apps/websites/support:lint
```

### Playwright behavior integration tests

Playwright behavior tests verify that real end-user flows trigger successful
backend calls and complete in the UI.

These are a temporary exception to the Bazel-first flow until dedicated Bazel
targets/wrappers are added for Playwright.

Set `E2E_EMAIL` and `E2E_PASSWORD` when running them directly; the pre-commit
hook auto-sets local dev defaults if these are unset.

```bash
cd apps/websites/customer
pnpm exec playwright test --list
pnpm exec playwright test e2e/customers.spec.ts

cd ../support
pnpm exec playwright test --list
pnpm exec playwright test e2e/features.spec.ts e2e/terminals.spec.ts
```

For a current endpoint gap inventory, see `docs/docs/playwright-behavior-coverage-gaps.md`.
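The pre-commit hook's fallback behavior can be sketched with shell parameter-expansion defaults. The default values below are placeholders for illustration, not the hook's actual values:

```shell
# Assumed shape of the hook's fallback: keep E2E_EMAIL/E2E_PASSWORD if already
# exported, otherwise fall back to local dev placeholders.
: "${E2E_EMAIL:=dev@example.com}"        # placeholder default
: "${E2E_PASSWORD:=local-dev-password}"  # placeholder default
export E2E_EMAIL E2E_PASSWORD
echo "running Playwright as $E2E_EMAIL"
```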
## Android

```bash
bazel build --config=android_arm64 //apps/android:android_app
bazel build --config=android_x86_64 //apps/android:android_app
bazel test --config=android_x86_64 //apps/android/app/src/androidTest:android_app_instrumentation_test
```

### Android build cache issues

If you see an error like:

```
.../rules_android+/tools/jdk/bootclasspath_android_only_system doesn't exist
```

clean Bazel's output tree and rebuild:

```bash
bazel clean --expunge
bazel build --config=android_arm64 //apps/android:android_app
```

This typically indicates a corrupted or missing generated bootclasspath in the Bazel cache.

### IntelliJ / BazelBSP sync errors

If IntelliJ fails during sync with:

```
.../rules_android+/tools/jdk/bootclasspath_android_only_system doesn't exist
```

the IDE is usually using a different `--output_base` (its own Bazel cache).

Fix:

- Find the output base from the error path (e.g. `/home/you/.cache/bazel/_bazel_you/...`).
- Run:

  ```bash
  bazel shutdown
  bazel --output_base=/path/from/error clean --expunge
  bazel --output_base=/path/from/error build @rules_android//tools/jdk:bootclasspath_android_only
  ```

- Re-sync the Bazel project in IntelliJ.

Optional: set a dedicated output base for IntelliJ in the Bazel settings, e.g.
`--output_base=$HOME/.cache/bazel-intellij`, and clean that path when needed.
## Android Production Signing

This section documents the end-to-end Android production release signing workflow.
The production keystore is custodied in GCP Secret Manager (see `apps/android/docs/keystore-custody.md`).

### How CI signs releases

CI fetches the production keystore from Secret Manager on non-PR events.
In `.github/workflows/android.yml`:

- Step: **Fetch prod keystore** (only when `github.event_name != 'pull_request'`)
  - Uses: `google-github-actions/secret-manager@v2`
  - Fetches: `projects/pinpointpos/secrets/android-prod-keystore/versions/latest`
- Step: **Write prod keystore file**
  - Decodes the base64 secret payload and writes it to `apps/android/prod.keystore`

For production release artifacts, build the production target (which is configured to use `apps/android/prod.keystore` for signing):

```bash
bazel build --config=android_arm64 //apps/android:android_app_prod
```
### Manual signing override (hotfix / local release)

Prereqs:

- You must have access to the Secret Manager secret `android-prod-keystore` in project `pinpointpos`.
- Follow the custody and two-person controls described in `apps/android/docs/keystore-custody.md`.

Fetch the keystore and write it to the expected path (do not print secret contents):

```bash
mkdir -p apps/android
gcloud secrets versions access latest \
  --project=pinpointpos \
  --secret=android-prod-keystore \
  | base64 -d > apps/android/prod.keystore
chmod 600 apps/android/prod.keystore
```

Then build the production artifact:

```bash
bazel build --config=android_arm64 //apps/android:android_app_prod
```

After the build, securely remove the keystore file, and ensure it never ends up in shell history or log output.
### Guardrails (non-negotiable)

- Never use `apps/android/dev.keystore` for production releases.
- Never commit `apps/android/prod.keystore` (it is intentionally gitignored).
- Keep the keystore on disk only for the shortest practical time (CI should treat it as a temporary build input).
- Never print the base64 secret payload (or any derived bytes) to logs.
### Key generation (if a new keystore is required)

Generate a new keystore (the alias must match what the build/signing expects):

```bash
# Bazel's android_binary(debug_key=...) expects alias 'androiddebugkey' and password 'android'
keytool -genkey -v -keystore apps/android/prod.keystore -alias androiddebugkey \
  -storepass android -keypass android -keyalg RSA -keysize 4096 -validity 10000
```

Upload the resulting keystore into Secret Manager as a new version of `android-prod-keystore` (custody/format details in `apps/android/docs/keystore-custody.md`).
### Rotation / compromise response

For rotation, rollback, and compromise response procedures (including IAM controls, versioning strategy, and incident steps), follow `apps/android/docs/keystore-custody.md`.

Key reference points:

- Keystore secret: `android-prod-keystore` (`projects/pinpointpos/secrets/android-prod-keystore/versions/latest`)
- CI access: `github-actions-android@pinpointpos.iam.gserviceaccount.com` (scoped `roles/secretmanager.secretAccessor` on the secret resource)
## Libraries

```bash
bazel build //libs/...
bazel test //libs/...
```

## Tools

```bash
bazel build //tools/...
bazel test //tools/...
```

## Specs

```bash
bazel build //apps/specifications/...
bazel build //apps/specifications/dafny:verify
```
## Device Management & Fleet Operations

### Fleet monitoring

The support portal provides a fleet dashboard at `/fleet` that shows all terminals
across an organization with live status, diagnostics, and geofence compliance.

Data flows from the Android app via periodic heartbeats:

- The Android app sends `POST /v1/terminal/heartbeat` (via terminal-api, mTLS) every 60 seconds.
- terminal-api writes diagnostics to the `terminal_diagnostics` Spanner table.
- Fleet API endpoints (`/api/v1/admin/fleet/{orgId}/devices`) read from Spanner and return enriched device entries with the latest diagnostics snapshot.
### Remote restart

Two remote-restart operations are available from the support portal and management-api:

- **Device reboot** (`POST .../devices/{terminalId}/restart`): sends an FCM data message with `action: REBOOT`. The gateway-owned Peak Pay Android reference host calls `DevicePolicyManager.reboot()` via the gateway device-owner API.
- **App restart** (`POST .../devices/{terminalId}/app-restart`): sends an FCM data message with `action: APP_RESTART`. The gateway-owned Peak Pay Android reference host kills and restarts its own process.

Both operations require the terminal to be online and to have a valid FCM token registered via `POST /v1/devices`.
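For orientation, the data message presumably follows the FCM HTTP v1 request shape sketched below. The exact payload built by management-api may differ, and `FCM_TOKEN` is a placeholder:

```shell
# Illustrative FCM HTTP v1 data-message body for a remote reboot.
# The real payload is built by management-api; keys shown are assumptions.
ACTION="REBOOT"  # or "APP_RESTART"
printf '{"message":{"token":"%s","data":{"action":"%s"}}}\n' "FCM_TOKEN" "$ACTION"
```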
### Geofence setup and violation monitoring

Geofences define a circular boundary around a store location. When a terminal's reported
GPS coordinates fall outside its assigned geofence radius, a violation record is created
in the `geofence_violations` Spanner table.

- Create/update/delete geofences via the `/api/v1/admin/geofences` endpoints.
- List active violations via `GET /api/v1/admin/geofences/violations/org/{orgId}`.
- Resolve violations manually via `POST .../violations/{violationId}/resolve`.

The customer portal and support portal both display geofence violation alerts when terminals are detected outside their assigned boundaries.
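The violation check reduces to a point-in-circle test on the reported coordinates. A minimal haversine sketch follows; `check_geofence` is a hypothetical helper and the coordinates are illustrative, not the service's implementation:

```shell
# Hypothetical check_geofence: report "violation" when the terminal's
# haversine distance from the store center exceeds the radius in meters.
check_geofence() {  # usage: check_geofence LAT LON CENTER_LAT CENTER_LON RADIUS_M
  awk -v lat="$1" -v lon="$2" -v clat="$3" -v clon="$4" -v r="$5" 'BEGIN {
    pi = atan2(0, -1); R = 6371000                       # Earth radius, meters
    dlat = (clat - lat) * pi / 180
    dlon = (clon - lon) * pi / 180
    a = sin(dlat/2)^2 + cos(lat*pi/180) * cos(clat*pi/180) * sin(dlon/2)^2
    d = 2 * R * atan2(sqrt(a), sqrt(1 - a))              # distance in meters
    if (d > r) print "violation"; else print "ok"
  }'
}

check_geofence 40.7128 -74.0060 40.7128 -74.0060 100   # same point -> ok
check_geofence 40.7128 -74.0060 40.7300 -74.0060 100   # ~1.9 km away -> violation
```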
### Device diagnostics pipeline

The full diagnostics pipeline:

```
Android heartbeat (60s interval)
  -> POST /v1/terminal/heartbeat (terminal-api, mTLS)
  -> terminal_diagnostics table (Spanner)
  -> GET /api/v1/admin/fleet/{orgId}/devices (management-api)
  -> Support portal fleet dashboard
```

Diagnostics fields include: battery level, charging state, Wi-Fi SSID/strength, cellular signal, storage free/total, RAM free/total, CPU temperature, app version, OS version, and GPS coordinates.
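For orientation only, a heartbeat body carrying those fields might look roughly like the sketch below. Every field name and unit here is an illustrative assumption; the authoritative schema lives in terminal-api:

```shell
# Illustrative heartbeat payload; real field names and units may differ.
HEARTBEAT='{
  "batteryLevel": 87,
  "charging": false,
  "wifiSsid": "store-pos",
  "wifiSignalDbm": -52,
  "cellularSignalDbm": -95,
  "storageFreeBytes": 9123456789,
  "storageTotalBytes": 32000000000,
  "ramFreeBytes": 1200000000,
  "ramTotalBytes": 4000000000,
  "cpuTempCelsius": 41.5,
  "appVersion": "1.12.0",
  "osVersion": "13",
  "latitude": 40.7128,
  "longitude": -74.006
}'
printf '%s\n' "$HEARTBEAT"
```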
## Cloud Run Revision Cleanup

Use `infra/scripts/cleanup-cloud-run-revisions.sh` to prune old revisions while keeping:

- The newest `N` revisions per service (`--keep`, default `3`)
- Any revisions currently serving traffic
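The keep-newest-N rule can be sketched as below. This covers only the count-based rule; the real script additionally keeps any revision currently serving traffic, and the revision names are illustrative:

```shell
# Keep the newest $KEEP revisions; everything after them is a deletion candidate.
KEEP=3
REVISIONS='rev-007
rev-006
rev-005
rev-004
rev-003'   # newest first; names are illustrative
printf '%s\n' "$REVISIONS" | tail -n +"$((KEEP + 1))"   # candidates for deletion
```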
Dry-run (no deletion):

```bash
infra/scripts/cleanup-cloud-run-revisions.sh \
  --project pinpoint-payments \
  --region us-east1 \
  --service pinpointpos-tx-bundler
```

Apply deletion for tx-bundler:

```bash
infra/scripts/cleanup-cloud-run-revisions.sh \
  --project pinpoint-payments \
  --region us-east1 \
  --service pinpointpos-tx-bundler \
  --keep 3 \
  --apply
```

Apply deletion for all Cloud Run services in the region:

```bash
infra/scripts/cleanup-cloud-run-revisions.sh \
  --project pinpoint-payments \
  --region us-east1 \
  --all-services \
  --keep 3 \
  --apply
```