# `get_file_hash` macro

This project provides a Rust procedural macro, `get_file_hash!`, designed to compute the SHA-256 hash of a specified file at compile time. This hash is then embedded directly into your compiled executable. This feature is invaluable for:

* **Integrity Verification:** Ensuring the deployed code hasn't been tampered with.
* **Versioning:** Embedding a unique identifier linked to the exact source code version.
* **Cache Busting:** Generating unique names for assets based on their content.

## Project Structure

* `get_file_hash_core`: A foundational crate containing the `get_file_hash!` macro definition.
* `get_file_hash`: The main library crate that re-exports the macro.
* `src/bin/get_file_hash.rs`: An example executable demonstrating the macro's usage by hashing its own source file and updating this `README.md`.
* `build.rs`: A build script that also utilizes the `get_file_hash!` macro to hash `Cargo.toml` during the build process.

## Usage of `get_file_hash!` Macro

To use the `get_file_hash!` macro, ensure you have `get_file_hash` (or `get_file_hash_core` for direct usage) as a dependency in your `Cargo.toml`.
### Example

```rust
use get_file_hash::get_file_hash;
use get_file_hash::CARGO_TOML_HASH;
use sha2::{Digest, Sha256};

fn main() {
    // The macro resolves the path relative to CARGO_MANIFEST_DIR
    let readme_hash = get_file_hash!("src/bin/readme.rs");
    let lib_hash = get_file_hash!("src/lib.rs");
    println!("The SHA-256 hash of src/lib.rs is: {}", lib_hash);
    println!("The SHA-256 hash of src/bin/readme.rs is: {}", readme_hash);
    println!("The SHA-256 hash of Cargo.toml is: {}", CARGO_TOML_HASH);
}
```

## Release

## [`README.md`](./README.md)

```bash
cargo run --bin readme > README.md
```

## [`src/bin/readme.rs`](src/bin/readme.rs)

* **Target File:** `src/bin/readme.rs`

## NIP-34 Integration: Git Repository Events on Nostr

This library provides a set of powerful macros and functions for integrating Git repository events with the Nostr protocol, adhering to the [NIP-34: Git Repositories on Nostr](https://github.com/nostr-protocol/nips/blob/master/34.md) specification. These tools allow you to publish various Git-related events to Nostr relays, enabling decentralized tracking and collaboration for your code repositories.

### Available NIP-34 Macros

Each macro provides a convenient way to publish specific NIP-34 event kinds:

* [`repository_announcement!`](#repository_announcement)
  * Publishes a `Repository Announcement` event (Kind 30617) to announce a new or updated Git repository.
* [`publish_patch!`](#publish_patch)
  * Publishes a `Patch` event (Kind 1617) containing a Git patch (diff) for a specific commit.
* [`publish_pull_request!`](#publish_pull_request)
  * Publishes a `Pull Request` event (Kind 1618) to propose changes and facilitate code review.
* [`publish_pr_update!`](#publish_pr_update)
  * Publishes a `Pull Request Update` event (Kind 1619) to update an existing pull request.
* [`publish_repository_state!`](#publish_repository_state)
  * Publishes a `Repository State` event (Kind 1620) to announce the current state of a branch (e.g., its latest commit).
* [`publish_issue!`](#publish_issue)
  * Publishes an `Issue` event (Kind 1621) to report bugs, request features, or track tasks.

### Running NIP-34 Examples

To see these macros in action, navigate to the `examples/` directory and run each example individually with the `nostr` feature enabled:

```bash
cargo run --example repository_announcement --features nostr
cargo run --example publish_patch --features nostr
cargo run --example publish_pull_request --features nostr
cargo run --example publish_pr_update --features nostr
cargo run --example publish_repository_state --features nostr
cargo run --example publish_issue --features nostr
```

* **SHA-256 Hash:** 6c6325c5a4c14f44cbda6ca53179ab3d6666ce7c916365668c6dd1d79215db59
* **Status:** Integrity Verified.

## [`build.rs`](build.rs)

* **Target File:** `build.rs`
* **SHA-256 Hash:** 20c958c8cbb5c77cf5eb3763b6da149b61241d328df52d39b7aa97903305c889
* **Status:** Integrity Verified.

## [`Cargo.toml`](Cargo.toml)

* **Target File:** `Cargo.toml`
* **SHA-256 Hash:** e3f392bf23b5fb40902acd313a8c76d1943060b6805ea8615de62f9baf0c6513
* **Status:** Integrity Verified.

## [`src/lib.rs`](src/lib.rs)

* **Target File:** `src/lib.rs`
* **SHA-256 Hash:** 591593482a6c9aac8793aa1e488e613f52a4effb1ec3465fd9d6a54537f2b123
* **Status:** Integrity Verified.
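For quick reference, the event kinds used above can be collected into a small lookup. This is an illustrative sketch using only the kind numbers listed in this README; the helper name is hypothetical and not part of the library's API:

```rust
/// Map a NIP-34 event kind (as used in this project) to a human-readable
/// label. Hypothetical helper, for illustration only; kind numbers are
/// taken verbatim from the macro list above.
fn nip34_kind_label(kind: u16) -> Option<&'static str> {
    match kind {
        30617 => Some("Repository Announcement"),
        1617 => Some("Patch"),
        1618 => Some("Pull Request"),
        1619 => Some("Pull Request Update"),
        1620 => Some("Repository State"),
        1621 => Some("Issue"),
        _ => None, // not a kind this project publishes
    }
}

fn main() {
    println!("{:?}", nip34_kind_label(1621));
}
```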
[workspace]
members = [".", "src/get_file_hash_core"]

[workspace.package]
version = "0.3.3"
edition = "2024"
license = "MIT"
authors = ["gnostr admin@gnostr.org"]
documentation = "https://github.com/gnostr-org/get_file_hash#readme"
homepage = "https://github.com/gnostr-org/get_file_hash"
repository = "https://github.com/gnostr-org/get_file_hash"
description = "A utility crate providing a procedural macro to compute and embed file hashes at compile time."

[package]
name = "get_file_hash"
version.workspace = true
edition.workspace = true
description.workspace = true
repository.workspace = true
homepage.workspace = true
authors.workspace = true
license.workspace = true

[package.metadata.wix]
upgrade-guid = "DED69220-26E3-4406-B564-7F2B58C56F57"
path-guid = "8DB39A25-8B99-4C25-8CF5-4704353C7C6E"
license = false
eula = false

[features]
nostr = ["dep:nostr", "dep:nostr-sdk", "dep:hex"]

[workspace.dependencies]
get_file_hash_core = { features = ["nostr"], path = "src/get_file_hash_core", version = "0.3.3" }
sha2 = "0.11.0"
nostr = "0.44.2"
nostr-sdk = "0.44.0"
hex = "0.4.2"
tokio = "1"
serde_json = "1.0"
csv = { version = "1.3.0", default-features = false }
url = "2.5.0"
reqwest = { version = "0.12.0", default-features = false }
tempfile = "3.27.0"
rand = "0.8"
frost-secp256k1-tr = "3.0.0-rc.0"
serial_test = { version = "3.4.0", features = ["test_logging"] }
log = "0.4"

[dependencies]
get_file_hash_core = { workspace = true, features = ["nostr"] }
sha2 = { workspace = true }
nostr = { workspace = true, optional = true }
nostr-sdk = { workspace = true, optional = true }
hex = { workspace = true, optional = true }
tokio = { workspace = true, features = ["full"] }
frost-secp256k1-tr = { workspace = true }
rand = { workspace = true }
serde_json.workspace = true

[build-dependencies]
get_file_hash_core = { workspace = true, features = ["nostr"] }
sha2 = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["full"] }
nostr = { workspace = true }
nostr-sdk = { workspace = true }
hex = { workspace = true }

# The profile that 'dist' will build with
[profile.dist]
inherits = "release"
lto = "thin"

[dev-dependencies]
serial_test = { workspace = true }
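For downstream users, a minimal dependency declaration might look like the following. This is a sketch based on the manifest above; it assumes the crate is consumed from a registry or vendored path, and the version pin simply mirrors the workspace version:

```toml
[dependencies]
# Version taken from the workspace manifest above; adjust as needed.
get_file_hash = "0.3.3"

# Or, to enable the Nostr integration described in this README:
# get_file_hash = { version = "0.3.3", features = ["nostr"] }
```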
# `build.rs` Documentation

This document explains the functionality of the `build.rs` script in this project. The `build.rs` script is a special Rust file that, if present, Cargo will compile and run *before* compiling the rest of your package. It's typically used for tasks that need to be performed during the build process, such as generating code, setting environment variables, or performing conditional compilation.

## Core Functionality

The `build.rs` script in this project performs the following key functions:

1. **Environment Variable Injection:** It computes various project-related values at build time and injects them as environment variables (via `cargo:rustc-env=...` directives) that can be accessed by the main crate using `env!("VAR_NAME")`. This includes:
   * `CARGO_PKG_NAME`: The name of the current package (from `Cargo.toml`).
   * `CARGO_PKG_VERSION`: The version of the current package (from `Cargo.toml`).
   * `GIT_COMMIT_HASH`: The full commit hash of the current Git HEAD (if in a Git repository).
   * `GIT_BRANCH`: The name of the current Git branch (if in a Git repository).
   * `CARGO_TOML_HASH`: The SHA-256 hash of the `Cargo.toml` file.
   * `LIB_HASH`: The SHA-256 hash of the `src/lib.rs` file.
   * `BUILD_HASH`: The SHA-256 hash of the `build.rs` file itself.
2. **Rerun Conditions:** It tells Cargo when to re-run the build script. This ensures that the injected environment variables and any conditional compilation logic stay up to date when relevant files change:
   * `Cargo.toml`
   * `src/lib.rs`
   * `build.rs`
   * `.git/HEAD` (to detect changes in the Git repository such as new commits or branch switches).
   * `src/get_file_hash_core/src/online_relays_gps.csv` (conditionally, if the file exists).
3. **Conditional Nostr Event Publishing (Release Builds with `nostr` feature):** If the project is compiled in **release mode (`--release`)** with the **`nostr` feature enabled (`--features nostr`)**, the `build.rs` script will connect to Nostr relays and publish events.
This is intended for "deterministic Nostr event build examples," as indicated by the comments in the file.

* **Relay Management:** It retrieves a list of default relay URLs. During event publishing, it identifies and removes "unfriendly" or unresponsive relays (e.g., those with timeouts, connection issues, or spam blocks) from the list for subsequent publications.
* **File Hashing and Key Generation:** For each Git-tracked file (when in a Git repository), it computes its SHA-256 hash. This hash is then used to derive a Nostr `SecretKey`.
* **Event Creation:**
  * **Individual File Events:** For each Git-tracked file, a Nostr `text_note` event is created. This event includes tags for:
    * `#file`: The path of the file.
    * `#version`: The package version.
    * `#commit`: The Git commit hash (if in a Git repository).
    * `#branch`: The Git branch name (if in a Git repository).
  * **Metadata Event:** It publishes a metadata event using `get_file_hash_core::publish_metadata_event`.
  * **Linking Event (Build Manifest):** After processing all individual files, if any events were published, a final "build manifest" `text_note` event is created. This event links, via event tags, to all the individual file events published during the build.
* **Output Storage:** The JSON representation of successfully published Nostr events (specifically the `EventId`) is saved to `~/.gnostr/build/{package_version}/{file_path_str_sanitized}/{hash}/{public_key}/{event_id}.json`. This provides a local record of what was published.

### `publish_nostr_event_if_release` Function

This asynchronous helper function is responsible for:

* Adding relays to the Nostr client.
* Connecting to relays.
* Signing the provided `EventBuilder` to create an `Event`.
* Sending the event to the configured relays.
* Logging success or failure for each relay.
* Identifying and removing unresponsive relays from the `relay_urls` list.
* Saving the published event's JSON to the local filesystem.
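The relay-filtering behavior described above can be sketched as a simple substring check on the error message. This is an illustrative guess at the shape of such a heuristic; the matched substrings are assumptions for illustration, not the project's actual error strings:

```rust
/// Sketch: decide whether a relay looks "unfriendly" or unresponsive based
/// on the error message returned during publication. The substring list is
/// illustrative only.
fn should_remove_relay(error_message: &str) -> bool {
    let msg = error_message.to_lowercase();
    ["timeout", "connection refused", "connection reset", "spam", "blocked"]
        .iter()
        .any(|needle| msg.contains(needle))
}

fn main() {
    // A timeout error marks the relay for removal; a success message does not.
    println!("{}", should_remove_relay("connection timeout"));
    println!("{}", should_remove_relay("event accepted"));
}
```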
### `should_remove_relay` Function

This helper function determines whether a relay should be considered "unfriendly" or unresponsive, based on common error messages received during Nostr event publication.

## Usage

To prevent "Too many open files" errors, especially during builds and tests involving numerous file operations or subprocesses (like `git ls-files` or parallel test execution), it may be necessary to increase the file descriptor limit:

* **For local development:** Run `ulimit -n 4096` in your terminal session before executing `cargo build` or `cargo test`. This setting is session-specific.
* **For CI environments:** The `.github/workflows/rust.yml` workflow is configured to set `ulimit -n 4096` for relevant test steps to ensure consistent execution.

The values set by `build.rs` can be accessed in your Rust code (e.g., `src/lib.rs`) at compile time using the `env!` macro. For example:

```rust
pub const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
```

The Nostr event publishing functionality of `build.rs` is primarily for release builds with the `nostr` feature enabled, allowing automatic, deterministic publication of project state to the Nostr network as part of the CI/CD pipeline.
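The directive side of the environment-variable injection described earlier can be sketched with the standard library alone. Cargo interprets lines printed to a build script's stdout that start with `cargo:`; the `emit_directives` helper below is hypothetical, and the hash value is a placeholder:

```rust
// Sketch: the kinds of lines a build.rs like this one prints so that Cargo
// injects env vars (readable via env!) and knows when to re-run the script.
fn emit_directives(cargo_toml_hash: &str, lib_hash: &str) -> Vec<String> {
    vec![
        format!("cargo:rustc-env=CARGO_TOML_HASH={}", cargo_toml_hash),
        format!("cargo:rustc-env=LIB_HASH={}", lib_hash),
        "cargo:rerun-if-changed=Cargo.toml".to_string(),
        "cargo:rerun-if-changed=src/lib.rs".to_string(),
        "cargo:rerun-if-changed=build.rs".to_string(),
        "cargo:rerun-if-changed=.git/HEAD".to_string(),
    ]
}

fn main() {
    // In a real build.rs these lines go to stdout for Cargo to consume.
    for line in emit_directives("deadbeef", "cafebabe") {
        println!("{}", line);
    }
}
```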
## Example Commands

To exercise the `build.rs` script's features, especially those related to Nostr event publishing, you can use the following `cargo` commands:

* **Build in release mode with the Nostr feature (verbose output):**

  ```bash
  cargo build --release --workspace --features nostr -vv
  ```

* **Run tests for `get_file_hash_core` sequentially with the Nostr feature and verbose logging (as in CI):**

  ```bash
  RUST_LOG=info,nostr_sdk=debug,frost=debug cargo test -p get_file_hash_core --features nostr -- --test-threads 1 --nocapture
  ```

* **Run all workspace tests in release mode with the Nostr feature:**

  ```bash
  cargo test --workspace --release --features nostr
  ```

* **Build `get_file_hash_core` in release mode with the Nostr feature (very verbose output):**

  ```bash
  cargo build --release --features nostr -vv -p get_file_hash_core
  ```

* **Run `get_file_hash_core` tests in release mode with the Nostr feature (very verbose output):**

  ```bash
  cargo test --release --features nostr -vv -p get_file_hash_core
  ```
name: Rust

on:
  push:
    branches: [ "*" ]
  pull_request:
    branches: [ "*" ]

env:
  CARGO_TERM_COLOR: always
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true
  RUST_LOG: info

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest]
        features_args: ["", "--no-default-features", "--features nostr"]
    steps:
      - uses: actions/checkout@v4
      - name: Build ${{ matrix.features_args }}
        run: cargo build --workspace --verbose ${{ matrix.features_args }}
      - name: Run workspace tests ${{ matrix.features_args }}
        run: |
          cargo test --workspace ${{ matrix.features_args }} -- --test-threads 1
      - name: Run get_file_hash_core tests ${{ matrix.features_args }}
        shell: bash
        run: |
          if [[ "${{ matrix.features_args }}" == "--features nostr" ]]; then
            cargo test -p get_file_hash_core ${{ matrix.features_args }} -- --test-threads 1 --nocapture
          else
            cargo test -p get_file_hash_core ${{ matrix.features_args }} -- --test-threads 1
          fi
      - name: Run get_file_hash tests ${{ matrix.features_args }}
        shell: bash
        run: |
          if [[ "${{ matrix.features_args }}" == "--features nostr" ]]; then
            cargo test -p get_file_hash ${{ matrix.features_args }} -- --test-threads 1 --nocapture
          else
            cargo test -p get_file_hash ${{ matrix.features_args }} -- --test-threads 1
          fi
      - name: Build Release ${{ matrix.features_args }}
        run: cargo build --workspace --release ${{ matrix.features_args }}
# This file was autogenerated by dist:
#
# Copyright 2022-2024, axodotdev
# SPDX-License-Identifier: MIT or Apache-2.0
#
# CI that:
#
# * checks for a Git Tag that looks like a release
# * builds artifacts with dist (archives, installers, hashes)
# * uploads those artifacts to temporary workflow zip
# * on success, uploads the artifacts to a GitHub Release
#
# Note that the GitHub Release will be created with a generated
# title/body based on your changelogs.

name: Release

permissions:
  "contents": "write"

# This task will run whenever you push a git tag that looks like a version
# like "1.0.0", "v0.1.0-prerelease.1", "my-app/0.1.0", "releases/v1.0.0", etc.
# Various formats will be parsed into a VERSION and an optional PACKAGE_NAME, where
# PACKAGE_NAME must be the name of a Cargo package in your workspace, and VERSION
# must be a Cargo-style SemVer Version (must have at least major.minor.patch).
#
# If PACKAGE_NAME is specified, then the announcement will be for that
# package (erroring out if it doesn't have the given version or isn't dist-able).
#
# If PACKAGE_NAME isn't specified, then the announcement will be for all
# (dist-able) packages in the workspace with that version (this mode is
# intended for workspaces with only one dist-able package, or with all dist-able
# packages versioned/released in lockstep).
#
# If you push multiple tags at once, separate instances of this workflow will
# spin up, creating an independent announcement for each one. However, GitHub
# will hard limit this to 3 tags per commit, as it will assume more tags is a
# mistake.
#
# If there's a prerelease-style suffix to the version, then the release(s)
# will be marked as a prerelease.
on:
  push:
    tags:
      - "**[0-9]+.[0-9]+.[0-9]+*"

jobs:
  # Run 'dist plan' (or host) to determine what tasks we need to do
  plan:
    runs-on: "ubuntu-22.04"
    outputs:
      val: ${{ steps.plan.outputs.manifest }}
      tag: ${{ !github.event.pull_request && github.ref_name || '' }}
      tag-flag: ${{ !github.event.pull_request && format('--tag={0}', github.ref_name) || '' }}
      publishing: ${{ !github.event.pull_request }}
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
          submodules: recursive
      #- name: Install sccache
      #  run: |
      #    cargo install sccache --version 0.4.0 --locked
      #    echo "$HOME/.cargo/bin" >> $GITHUB_PATH
      - name: Install dist
        # we specify bash to get pipefail; it guards against the `curl` command
        # failing. otherwise `sh` won't catch that `curl` returned non-0
        shell: bash
        run: "curl --proto '=https' --tlsv1.2 -LsSf https://github.com/axodotdev/cargo-dist/releases/download/v0.30.3/cargo-dist-installer.sh | sh"
      - name: Cache dist
        uses: actions/upload-artifact@v4
        with:
          name: cargo-dist-cache
          path: ~/.cargo/bin/dist
      # sure would be cool if github gave us proper conditionals...
      # so here's a doubly-nested ternary-via-truthiness to try to provide the best possible
      # functionality based on whether this is a pull_request, and whether it's from a fork.
      # (PRs run on the *source* but secrets are usually on the *target* -- that's *good*
      # but also really annoying to build CI around when it needs secrets to work right.)
      - id: plan
        run: |
          dist ${{ (!github.event.pull_request && format('host --steps=create --tag={0}', github.ref_name)) || 'plan' }} --output-format=json > plan-dist-manifest.json
          echo "dist ran successfully"
          cat plan-dist-manifest.json
          echo "manifest=$(jq -c "." plan-dist-manifest.json)" >> "$GITHUB_OUTPUT"
      - name: "Upload dist-manifest.json"
        uses: actions/upload-artifact@v4
        with:
          name: artifacts-plan-dist-manifest
          path: plan-dist-manifest.json

  # Build and package all the platform-specific things
  build-local-artifacts:
    name: build-local-artifacts (${{ join(matrix.targets, ', ') }})
    # Let the initial task tell us to not run (currently very blunt)
    needs:
      - plan
    if: ${{ fromJson(needs.plan.outputs.val).ci.github.artifacts_matrix.include != null && (needs.plan.outputs.publishing == 'true' || fromJson(needs.plan.outputs.val).ci.github.pr_run_mode == 'upload') }}
    strategy:
      fail-fast: false
      # Target platforms/runners are computed by dist in create-release.
      # Each member of the matrix has the following arguments:
      #
      # - runner: the github runner
      # - dist-args: cli flags to pass to dist
      # - install-dist: expression to run to install dist on the runner
      #
      # Typically there will be:
      # - 1 "global" task that builds universal installers
      # - N "local" tasks that build each platform's binaries and platform-specific installers
      matrix: ${{ fromJson(needs.plan.outputs.val).ci.github.artifacts_matrix }}
    runs-on: ${{ matrix.runner }}
    container: ${{ matrix.container && matrix.container.image || null }}
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      BUILD_MANIFEST_NAME: target/distrib/${{ join(matrix.targets, '-') }}-dist-manifest.json
    steps:
      - name: enable windows longpaths
        run: |
          git config --global core.longpaths true
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
          submodules: recursive
      #- name: Install sccache
      #  run: |
      #    cargo install sccache --version 0.4.0 --locked
      #    echo "$HOME/.cargo/bin" >> $GITHUB_PATH
      - name: Install Rust non-interactively if not already installed
        if: ${{ matrix.container }}
        run: |
          if ! command -v cargo > /dev/null 2>&1; then
            curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
            echo "$HOME/.cargo/bin" >> $GITHUB_PATH
          fi
      - name: Install dist
        run: ${{ matrix.install_dist.run }}
      # Get the dist-manifest
      - name: Fetch local artifacts
        uses: actions/download-artifact@v4
        with:
          pattern: artifacts-*
          path: target/distrib/
          merge-multiple: true
      - name: Install dependencies
        run: |
          ${{ matrix.packages_install }}
      - name: Build artifacts
        run: |
          # Actually do builds and make zips and whatnot
          dist build ${{ needs.plan.outputs.tag-flag }} --print=linkage --output-format=json ${{ matrix.dist_args }} > dist-manifest.json
          echo "dist ran successfully"
      - id: cargo-dist
        name: Post-build
        # We force bash here just because github makes it really hard to get values up
        # to "real" actions without writing to env-vars, and writing to env-vars has
        # inconsistent syntax between shell and powershell.
        shell: bash
        run: |
          # Parse out what we just built and upload it to scratch storage
          echo "paths<<EOF" >> "$GITHUB_OUTPUT"
          dist print-upload-files-from-manifest --manifest dist-manifest.json >> "$GITHUB_OUTPUT"
          echo "EOF" >> "$GITHUB_OUTPUT"
          cp dist-manifest.json "$BUILD_MANIFEST_NAME"
      - name: "Upload artifacts"
        uses: actions/upload-artifact@v4
        with:
          name: artifacts-build-local-${{ join(matrix.targets, '_') }}
          path: |
            ${{ steps.cargo-dist.outputs.paths }}
            ${{ env.BUILD_MANIFEST_NAME }}

  # Build and package all the platform-agnostic(ish) things
  build-global-artifacts:
    needs:
      - plan
      - build-local-artifacts
    runs-on: "ubuntu-22.04"
    env:
      BUILD_MANIFEST_NAME: target/distrib/global-dist-manifest.json
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
          submodules: recursive
      #- name: Install sccache
      #  run: |
      #    cargo install sccache --version 0.4.0 --locked
      #    echo "$HOME/.cargo/bin" >> $GITHUB_PATH
      - name: Install cached dist
        uses: actions/download-artifact@v4
        with:
          name: cargo-dist-cache
          path: ~/.cargo/bin/
      - run: chmod +x ~/.cargo/bin/dist
      # Get all the local artifacts for the global tasks to use (for e.g. checksums)
      - name: Fetch local artifacts
        uses: actions/download-artifact@v4
        with:
          pattern: artifacts-*
          path: target/distrib/
          merge-multiple: true
      - id: cargo-dist
        shell: bash
        run: |
          dist build ${{ needs.plan.outputs.tag-flag }} --output-format=json "--artifacts=global" > dist-manifest.json
          echo "dist ran successfully"
          # Parse out what we just built and upload it to scratch storage
          echo "paths<<EOF" >> "$GITHUB_OUTPUT"
          jq --raw-output ".upload_files[]" dist-manifest.json >> "$GITHUB_OUTPUT"
          echo "EOF" >> "$GITHUB_OUTPUT"
          cp dist-manifest.json "$BUILD_MANIFEST_NAME"
      - name: "Upload artifacts"
        uses: actions/upload-artifact@v4
        with:
          name: artifacts-build-global
          path: |
            ${{ steps.cargo-dist.outputs.paths }}
            ${{ env.BUILD_MANIFEST_NAME }}

  # Determines if we should publish/announce
  host:
    needs:
      - plan
      - build-local-artifacts
      - build-global-artifacts
    # Only run if we're "publishing", and only if plan, local and global didn't fail (skipped is fine)
    if: ${{ always() && needs.plan.result == 'success' && needs.plan.outputs.publishing == 'true' && (needs.build-global-artifacts.result == 'skipped' || needs.build-global-artifacts.result == 'success') && (needs.build-local-artifacts.result == 'skipped' || needs.build-local-artifacts.result == 'success') }}
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    runs-on: "ubuntu-22.04"
    outputs:
      val: ${{ steps.host.outputs.manifest }}
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
          submodules: recursive
      #- name: Install sccache
      #  run: |
      #    cargo install sccache --version 0.4.0 --locked
      #    echo "$HOME/.cargo/bin" >> $GITHUB_PATH
      - name: Install cached dist
        uses: actions/download-artifact@v4
        with:
          name: cargo-dist-cache
          path: ~/.cargo/bin/
      - run: chmod +x ~/.cargo/bin/dist
      # Fetch artifacts from scratch-storage
      - name: Fetch artifacts
        uses: actions/download-artifact@v4
        with:
          pattern: artifacts-*
          path: target/distrib/
          merge-multiple: true
      - id: host
        shell: bash
        run: |
          dist host ${{ needs.plan.outputs.tag-flag }} --steps=upload --steps=release --output-format=json > dist-manifest.json
          echo "artifacts uploaded and released successfully"
          cat dist-manifest.json
          echo "manifest=$(jq -c "." dist-manifest.json)" >> "$GITHUB_OUTPUT"
      - name: "Upload dist-manifest.json"
        uses: actions/upload-artifact@v4
        with:
          # Overwrite the previous copy
          name: artifacts-dist-manifest
          path: dist-manifest.json
      # Create a GitHub Release while uploading all files to it
      - name: "Download GitHub Artifacts"
        uses: actions/download-artifact@v4
        with:
          pattern: artifacts-*
          path: artifacts
          merge-multiple: true
      - name: Cleanup
        run: |
          # Remove the granular manifests
          rm -f artifacts/*-dist-manifest.json
      - name: Create GitHub Release
        env:
          PRERELEASE_FLAG: "${{ fromJson(steps.host.outputs.manifest).announcement_is_prerelease && '--prerelease' || '' }}"
          ANNOUNCEMENT_TITLE: "${{ fromJson(steps.host.outputs.manifest).announcement_title }}"
          ANNOUNCEMENT_BODY: "${{ fromJson(steps.host.outputs.manifest).announcement_github_body }}"
          RELEASE_COMMIT: "${{ github.sha }}"
        run: |
          # Write and read notes from a file to avoid quoting breaking things
          echo "$ANNOUNCEMENT_BODY" > $RUNNER_TEMP/notes.txt
          gh release create "${{ needs.plan.outputs.tag }}" --target "$RELEASE_COMMIT" $PRERELEASE_FLAG --title "$ANNOUNCEMENT_TITLE" --notes-file "$RUNNER_TEMP/notes.txt" artifacts/*

  publish-homebrew-formula:
    needs:
      - plan
      - host
    runs-on: "ubuntu-22.04"
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      PLAN: ${{ needs.plan.outputs.val }}
      GITHUB_USER: "axo bot"
      GITHUB_EMAIL: "admin+bot@axo.dev"
    if: ${{ !fromJson(needs.plan.outputs.val).announcement_is_prerelease || fromJson(needs.plan.outputs.val).publish_prereleases }}
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: true
          repository: "gnostr-org/homebrew-gnostr"
          token: ${{ secrets.HOMEBREW_TAP_TOKEN }}
      # So we have access to the formula
      - name: Fetch homebrew formulae
        uses: actions/download-artifact@v4
        with:
          pattern: artifacts-*
          path: Formula/
          merge-multiple: true
      # This is extra complex because you can make your Formula name not match your app name
      # so we need to find releases with a *.rb file, and publish with that filename.
      - name: Commit formula files
        run: |
          git config --global user.name "${GITHUB_USER}"
          git config --global user.email "${GITHUB_EMAIL}"
          for release in $(echo "$PLAN" | jq --compact-output '.releases[] | select([.artifacts[] | endswith(".rb")] | any)'); do
            filename=$(echo "$release" | jq '.artifacts[] | select(endswith(".rb"))' --raw-output)
            name=$(echo "$filename" | sed "s/\.rb$//")
            version=$(echo "$release" | jq .app_version --raw-output)
            export PATH="/home/linuxbrew/.linuxbrew/bin:$PATH"
            brew update
            # We avoid reformatting user-provided data such as the app description and homepage.
            brew style --except-cops FormulaAudit/Homepage,FormulaAudit/Desc,FormulaAuditStrict --fix "Formula/$(unknown)" || true
            git add "Formula/$(unknown)"
            git commit -m "${name} ${version}"
          done
          git push origin HEAD

  announce:
    needs:
      - plan
      - host
      - publish-homebrew-formula
    # use "always() && ..." to allow us to wait for all publish jobs while
    # still allowing individual publish jobs to skip themselves (for prereleases).
    # "host" however must run to completion, no skipping allowed!
    if: ${{ always() && needs.host.result == 'success' && (needs.publish-homebrew-formula.result == 'skipped' || needs.publish-homebrew-formula.result == 'success') }}
    runs-on: "ubuntu-22.04"
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
          submodules: recursive
//! A simple command-line tool that calculates and displays the SHA-256 hash of
//! its own source file.
//!
//! This utility demonstrates how to use the `get_file_hash!` macro to obtain
//! the hash of a specified file at compile time and incorporate it into runtime
//! logic.

use get_file_hash::{BUILD_HASH, CARGO_TOML_HASH, LIB_HASH};
use get_file_hash_core::get_file_hash;
use sha2::{Digest, Sha256};

const README_TEMPLATE_PART0: &str = r##"# `get_file_hash` macro

This project provides a Rust procedural macro, `get_file_hash!`, designed to compute the SHA-256 hash of a specified file at compile time. This hash is then embedded directly into your compiled executable. This feature is invaluable for:

* **Integrity Verification:** Ensuring the deployed code hasn't been tampered with.
* **Versioning:** Embedding a unique identifier linked to the exact source code version.
* **Cache Busting:** Generating unique names for assets based on their content.

## Project Structure

* `get_file_hash_core`: A foundational crate containing the `get_file_hash!` macro definition.
* `get_file_hash`: The main library crate that re-exports the macro.
* `src/bin/get_file_hash.rs`: An example executable demonstrating the macro's usage by hashing its own source file and updating this `README.md`.
* `build.rs`: A build script that also utilizes the `get_file_hash!` macro to hash `Cargo.toml` during the build process.

## Usage of `get_file_hash!` Macro

To use the `get_file_hash!` macro, ensure you have `get_file_hash` (or `get_file_hash_core` for direct usage) as a dependency in your `Cargo.toml`.

### Example

```rust
use get_file_hash::get_file_hash;
use get_file_hash::CARGO_TOML_HASH;
use sha2::{Digest, Sha256};

fn main() {
    // The macro resolves the path relative to CARGO_MANIFEST_DIR
    let readme_hash = get_file_hash!("src/bin/readme.rs");
    let lib_hash = get_file_hash!("src/lib.rs");
    println!("The SHA-256 hash of src/lib.rs is: {}", lib_hash);
    println!("The SHA-256 hash of src/bin/readme.rs is: {}", readme_hash);
    println!("The SHA-256 hash of Cargo.toml is: {}", CARGO_TOML_HASH);
}
```
"##;

const README_TEMPLATE_PART1: &str = r"## Release

## [`README.md`](./README.md)

```bash
cargo run --bin readme > README.md
```

## [`src/bin/readme.rs`](src/bin/readme.rs)

* **Target File:** `src/bin/readme.rs`
";

const README_TEMPLATE_PART2: &str = r"## [`build.rs`](build.rs)

* **Target File:** `build.rs`
";

const README_TEMPLATE_PART3: &str = r"## [`Cargo.toml`](Cargo.toml)

* **Target File:** `Cargo.toml`
";

const README_TEMPLATE_PART4: &str = r"## [`src/lib.rs`](src/lib.rs)

* **Target File:** `src/lib.rs`
";

const README_TEMPLATE_PART_NIP34: &str = r"## NIP-34 Integration: Git Repository Events on Nostr

This library provides a set of powerful macros and functions for integrating Git repository events with the Nostr protocol, adhering to the [NIP-34: Git Repositories on Nostr](https://github.com/nostr-protocol/nips/blob/master/34.md) specification. These tools allow you to publish various Git-related events to Nostr relays, enabling decentralized tracking and collaboration for your code repositories.

### Available NIP-34 Macros

Each macro provides a convenient way to publish specific NIP-34 event kinds:

* [`repository_announcement!`](#repository_announcement)
  * Publishes a `Repository Announcement` event (Kind 30617) to announce a new or updated Git repository.
* [`publish_patch!`](#publish_patch)
  * Publishes a `Patch` event (Kind 1617) containing a Git patch (diff) for a specific commit.
* [`publish_pull_request!`](#publish_pull_request)
  * Publishes a `Pull Request` event (Kind 1618) to propose changes and facilitate code review.
* [`publish_pr_update!`](#publish_pr_update)
  * Publishes a `Pull Request Update` event (Kind 1619) to update an existing pull request.
* [`publish_repository_state!`](#publish_repository_state)
  * Publishes a `Repository State` event (Kind 1620) to announce the current state of a branch (e.g., its latest commit).
* [`publish_issue!`](#publish_issue)
  * Publishes an `Issue` event (Kind 1621) to report bugs, request features, or track tasks.

### Running NIP-34 Examples

To see these macros in action, navigate to the `examples/` directory and run each example individually with the `nostr` feature enabled:

```bash
cargo run --example repository_announcement --features nostr
cargo run --example publish_patch --features nostr
cargo run --example publish_pull_request --features nostr
cargo run --example publish_pr_update --features nostr
cargo run --example publish_repository_state --features nostr
cargo run --example publish_issue --features nostr
```
";

/// The main entry point of the application.
///
/// This function calculates the SHA-256 hash of the `readme.rs` source file
/// using a custom procedural macro and then prints the hash to the console.
/// It also includes a basic integrity verification check.
fn main() {
    // Calculate the SHA-256 hash of the current file (`readme.rs`) at
    // compile time. The `get_file_hash!` macro reads the file content and
    // computes its hash.
    let self_hash = get_file_hash!("readme.rs");
    let status_message = if self_hash.starts_with("e3b0") {
        "Warning: This hash represents an empty file."
    } else {
        "Integrity Verified."
    };
    let build_message = if BUILD_HASH.starts_with("e3b0") {
        "Warning: This hash represents an empty file."
    } else {
        "Integrity Verified."
    };
    let cargo_message = if CARGO_TOML_HASH.starts_with("e3b0") {
        "Warning: This hash represents an empty file."
    } else {
        "Integrity Verified."
    };
    let lib_message = if LIB_HASH.starts_with("e3b0") {
        "Warning: This hash represents an empty file."
    } else {
        "Integrity Verified."
    };

    print!("{}{}{}", README_TEMPLATE_PART0, README_TEMPLATE_PART1, README_TEMPLATE_PART_NIP34);
    println!("* **SHA-256 Hash:** {}", self_hash);
    // The status strings already end with a period, so none is appended here.
    println!("* **Status:** {}\n", status_message);

    print!("{}", README_TEMPLATE_PART2);
    println!("* **SHA-256 Hash:** {}", BUILD_HASH);
    println!("* **Status:** {}\n", build_message);

    print!("{}", README_TEMPLATE_PART3);
    println!("* **SHA-256 Hash:** {}", CARGO_TOML_HASH);
    println!("* **Status:** {}\n", cargo_message);

    print!("{}", README_TEMPLATE_PART4);
    println!("* **SHA-256 Hash:** {}", LIB_HASH);
    println!("* **Status:** {}\n", lib_message);
}
//! A simple command-line tool that calculates and displays the SHA-256 hash of
//! its own source file.
//!
//! This utility demonstrates how to use the `get_file_hash!` macro to obtain
//! the hash of a specified file at compile time and incorporate it into runtime
//! logic.

use get_file_hash_core::get_file_hash;
use sha2::{Digest, Sha256};

const README_TEMPLATE_PART1: &str = r##"# `get_file_hash` macro

This project provides a Rust procedural macro, `get_file_hash!`, designed to compute the SHA-256 hash of a specified file at compile time. This hash is then embedded directly into your compiled executable.

This feature is invaluable for:

* **Integrity Verification:** Ensuring the deployed code hasn't been tampered with.
* **Versioning:** Embedding a unique identifier linked to the exact source code version.
* **Cache Busting:** Generating unique names for assets based on their content.

## Project Structure

* `get_file_hash_core`: A foundational crate containing the `get_file_hash!` macro definition.
* `get_file_hash`: The main library crate that re-exports the macro.
* `src/bin/get_file_hash.rs`: An example executable demonstrating the macro's usage by hashing its own source file and updating this `README.md`.
* `build.rs`: A build script that also utilizes the `get_file_hash!` macro to hash `Cargo.toml` during the build process.

## Usage of `get_file_hash!` Macro

To use the `get_file_hash!` macro, ensure you have `get_file_hash` (or `get_file_hash_core` for direct usage) as a dependency in your `Cargo.toml`.

### Example

```rust
use get_file_hash::get_file_hash;
use sha2::{Digest, Sha256};

fn main() {
    // The macro resolves the path relative to CARGO_MANIFEST_DIR
    let file_hash = get_file_hash!("src/lib.rs");
    println!("The SHA-256 hash of src/lib.rs is: {}", file_hash);
}
```
"##;

const README_TEMPLATE_PART2: &str = r"## Setup and Building

1. **Clone the repository:**

   ```bash
   git clone <repository-url>
   cd <repository-name>
   ```

2. **Build the project:**

   ```bash
   cargo build
   ```

   During the build, `build.rs` will execute and print the hash of `Cargo.toml`.

3. **Run the example executable:**

   ```bash
   cargo run --bin get_file_hash
   ```

   This will print the hash of `src/bin/get_file_hash.rs` to your console.

## Updating this `README.md`

The hash information in this `README.md` is automatically generated by running the example executable. To update it, execute:

```bash
cargo run --bin get_file_hash > README.md
```

## Current File Hash Information (of `src/bin/get_file_hash.rs`)

* **Target File:** `src/bin/get_file_hash.rs`
";

/// The main entry point of the application.
///
/// This function calculates the SHA-256 hash of the `get_file_hash.rs` source
/// file using a custom procedural macro and then prints the hash to the
/// console. It also includes a basic integrity verification check.
fn main() {
    // Calculate the SHA-256 hash of the current file (`get_file_hash.rs`) at
    // compile time. The `get_file_hash!` macro reads the file content and
    // computes its hash.
    let self_hash = get_file_hash!("get_file_hash.rs");

    let status_message = if self_hash.starts_with("e3b0") {
        "Warning: This hash represents an empty file."
    } else {
        "Integrity Verified."
    };

    print!("{}{}", README_TEMPLATE_PART1, README_TEMPLATE_PART2);
    println!("* **SHA-256 Hash:** `{}`", self_hash);
    println!("* **Status:** {}\n", status_message);
}
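The `starts_with("e3b0")` checks above work because the SHA-256 digest of zero bytes is the well-known constant `e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855` — any file hashing to it is empty. A minimal std-only sketch of that sentinel logic (the helper name is illustrative, not part of this crate):

```rust
/// SHA-256 of zero bytes; any file that hashes to this is empty.
const EMPTY_SHA256: &str =
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

/// Illustrative helper mirroring the status logic in the binaries above.
fn integrity_status(hash: &str) -> &'static str {
    if hash.starts_with("e3b0") {
        "Warning: This hash represents an empty file."
    } else {
        "Integrity Verified."
    }
}

fn main() {
    // The empty-file digest trips the warning; any other hash passes.
    assert_eq!(
        integrity_status(EMPTY_SHA256),
        "Warning: This hash represents an empty file."
    );
    assert_eq!(integrity_status("a1b2c3"), "Integrity Verified.");
    println!("sentinel check ok");
}
```

Matching only the `e3b0` prefix is a heuristic shortcut; comparing against the full 64-character constant would rule out accidental prefix collisions.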
#[cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
#[cfg(feature = "nostr")]
use rand::thread_rng;
#[cfg(feature = "nostr")]
use std::collections::BTreeMap;

#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = thread_rng();
    let max_signers = 3;
    let min_signers = 2;

    ////////////////////////////////////////////////////////////////////////////
    // Round 0: Key Generation (Trusted Dealer)
    ////////////////////////////////////////////////////////////////////////////
    // In a real P2P setup, you'd use Distributed Key Generation (DKG).
    // For local testing/simulations, the trusted dealer is faster.
    let (shares, pubkey_package) = frost::keys::generate_with_dealer(
        max_signers,
        min_signers,
        frost::keys::IdentifierList::Default,
        &mut rng,
    )?;

    // Verifying the public key exists
    let group_public_key = pubkey_package.verifying_key();
    println!("Group Public Key: {:?}", group_public_key);

    ////////////////////////////////////////////////////////////////////////////
    // Round 1: Commitment
    ////////////////////////////////////////////////////////////////////////////
    let message = b"BIP-64MOD Consensus Proposal";
    let mut signing_commitments = BTreeMap::new();
    let mut participant_nonces = BTreeMap::new();

    // Participants 1 and 2 decide to sign
    for i in 1..=min_signers {
        let identifier = frost::Identifier::try_from(i as u16)?;
        // Generate nonces and commitments
        let (nonces, commitments) =
            frost::round1::commit(shares[&identifier].signing_share(), &mut rng);
        signing_commitments.insert(identifier, commitments);
        participant_nonces.insert(identifier, nonces);
    }

    ////////////////////////////////////////////////////////////////////////////
    // Round 2: Signing
    ////////////////////////////////////////////////////////////////////////////
    let mut signature_shares = BTreeMap::new();
    let signing_package = frost::SigningPackage::new(signing_commitments, message);

    for i in 1..=min_signers {
        let identifier = frost::Identifier::try_from(i as u16)?;
        let nonces = &participant_nonces[&identifier];
        // Each participant produces a signature share
        let key_package: frost::keys::KeyPackage = shares[&identifier].clone().try_into()?;
        let share = frost::round2::sign(&signing_package, nonces, &key_package)?;
        signature_shares.insert(identifier, share);
    }

    ////////////////////////////////////////////////////////////////////////////
    // Finalization: Aggregation
    ////////////////////////////////////////////////////////////////////////////
    let group_signature = frost::aggregate(
        &signing_package,
        &signature_shares,
        &pubkey_package,
    )?;

    // Verification
    group_public_key.verify(message, &group_signature)?;
    println!("Threshold signature verified successfully!");

    Ok(())
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example trusted-dealer --features nostr");
}
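The t-of-n property the trusted-dealer example relies on comes from Shamir secret sharing: the group secret is the constant term of a random degree-(t-1) polynomial, and any t shares recover it by Lagrange interpolation at zero. A toy illustration over the small prime field GF(251) — purely pedagogical, not the secp256k1 arithmetic the `frost` crate uses:

```rust
// Toy 2-of-3 Shamir sharing over GF(251); illustrative only.
const P: i64 = 251;

fn modp(x: i64) -> i64 {
    ((x % P) + P) % P
}

// Modular inverse via Fermat's little theorem: a^(P-2) mod P.
fn inv(a: i64) -> i64 {
    let (mut base, mut exp, mut acc) = (modp(a), P - 2, 1i64);
    while exp > 0 {
        if exp & 1 == 1 {
            acc = modp(acc * base);
        }
        base = modp(base * base);
        exp >>= 1;
    }
    acc
}

// Evaluate f(x) = secret + coeff * x (degree 1 => threshold 2).
fn share(secret: i64, coeff: i64, x: i64) -> i64 {
    modp(secret + coeff * x)
}

// Reconstruct f(0) from any two points via Lagrange interpolation.
fn reconstruct(p1: (i64, i64), p2: (i64, i64)) -> i64 {
    let l1 = modp(p2.0 * inv(p2.0 - p1.0)); // x2 / (x2 - x1)
    let l2 = modp(p1.0 * inv(p1.0 - p2.0)); // x1 / (x1 - x2)
    modp(p1.1 * l1 + p2.1 * l2)
}

fn main() {
    let (secret, coeff) = (42, 17);
    let shares: Vec<(i64, i64)> = (1..=3).map(|x| (x, share(secret, coeff, x))).collect();
    // Any two of the three shares recover the secret; the third signer may stay offline.
    assert_eq!(reconstruct(shares[0], shares[2]), secret);
    println!("recovered secret: {}", reconstruct(shares[0], shares[2]));
}
```

FROST does the analogous interpolation in the exponent, so the secret itself is never reassembled; only signature shares are combined.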
#[cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
#[cfg(feature = "nostr")]
use rand::thread_rng;
#[cfg(feature = "nostr")]
use std::collections::BTreeMap;

/// A simplified ROAST Coordinator that manages signing sessions
#[cfg(feature = "nostr")]
struct RoastCoordinator {
    min_signers: u16,
    _message: Vec<u8>,
    commitments: BTreeMap<frost::Identifier, frost::round1::SigningCommitments>,
    nonces: BTreeMap<frost::Identifier, frost::round1::SigningNonces>,
    shares: BTreeMap<frost::Identifier, frost::round2::SignatureShare>,
}

#[cfg(feature = "nostr")]
impl RoastCoordinator {
    fn new(min_signers: u16, message: &[u8]) -> Self {
        Self {
            min_signers,
            _message: message.to_vec(),
            commitments: BTreeMap::new(),
            nonces: BTreeMap::new(),
            shares: BTreeMap::new(),
        }
    }

    /// ROAST Logic: Collect commitments until we hit the threshold.
    /// In a real P2P system, this would be an async stream handler.
    fn add_commitment(
        &mut self,
        id: frost::Identifier,
        comms: frost::round1::SigningCommitments,
        nonces: frost::round1::SigningNonces,
    ) {
        if self.commitments.len() < self.min_signers as usize {
            self.commitments.insert(id, comms);
            self.nonces.insert(id, nonces);
        }
    }

    /// ROAST Logic: Collect signature shares.
    fn add_share(&mut self, id: frost::Identifier, share: frost::round2::SignatureShare) {
        if self.shares.len() < self.min_signers as usize {
            self.shares.insert(id, share);
        }
    }

    fn is_ready_to_sign(&self) -> bool {
        self.commitments.len() >= self.min_signers as usize
    }

    fn is_ready_to_aggregate(&self) -> bool {
        self.shares.len() >= self.min_signers as usize
    }
}

#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = thread_rng();
    let (max_signers, min_signers) = (5, 3);
    let message = b"BIP-64MOD Context: ROAST Coordination";

    // 1. Setup: Generate keys (Dealer mode for simulation)
    let (key_shares, pubkey_package) = frost::keys::generate_with_dealer(
        max_signers,
        min_signers,
        frost::keys::IdentifierList::Default,
        &mut rng,
    )?;

    let mut coordinator = RoastCoordinator::new(min_signers, message);

    // 2. Round 1: Asynchronous Commitment Collection
    // Simulate signers 1, 3, and 5 responding first (ROAST skips 2 and 4)
    for &id_num in &[1, 3, 5] {
        let id = frost::Identifier::try_from(id_num as u16)?;
        let (nonces, comms) = frost::round1::commit(key_shares[&id].signing_share(), &mut rng);
        // Signers store their nonces locally, send comms to coordinator
        coordinator.add_commitment(id, comms, nonces);
        // Note: Signer 2 was "offline", but ROAST doesn't care because we hit 3/5.
    }

    // 3. Round 2: Signing
    if coordinator.is_ready_to_sign() {
        let signing_package =
            frost::SigningPackage::new(coordinator.commitments.clone(), message);
        let mut temp_shares = BTreeMap::new();
        for &id in coordinator.commitments.keys() {
            // In reality, coordinator sends signing_package to signers
            // Here we simulate the signers producing shares
            let nonces = &coordinator.nonces[&id];
            let key_package: frost::keys::KeyPackage = key_shares[&id].clone().try_into()?;
            let share = frost::round2::sign(&signing_package, nonces, &key_package)?;
            temp_shares.insert(id, share);
        }
        for (id, share) in temp_shares {
            coordinator.add_share(id, share);
        }
    }

    // 4. Finalization: Aggregation
    if coordinator.is_ready_to_aggregate() {
        let signing_package =
            frost::SigningPackage::new(coordinator.commitments.clone(), message);
        let group_signature = frost::aggregate(
            &signing_package,
            &coordinator.shares,
            &pubkey_package,
        )?;
        pubkey_package.verifying_key().verify(message, &group_signature)?;
        println!("ROAST-coordinated signature verified!");
    }

    Ok(())
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example roast-experiment --features nostr");
}
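The essence of the ROAST coordinator in the example above is that it never blocks on a fixed signer set: it keeps a session open and proceeds as soon as any t responsive participants have committed. Stripped of the cryptography, that collection logic is just a threshold gate. A std-only sketch with plain integers standing in for FROST identifiers and commitments (names are illustrative):

```rust
use std::collections::BTreeMap;

/// Generic threshold collector: accepts responses until `t` distinct
/// participants have answered, ignoring duplicates and late arrivals.
struct ThresholdGate<V> {
    t: usize,
    responses: BTreeMap<u16, V>,
}

impl<V> ThresholdGate<V> {
    fn new(t: usize) -> Self {
        Self { t, responses: BTreeMap::new() }
    }

    /// Record a response; returns true once the threshold is reached.
    fn submit(&mut self, id: u16, value: V) -> bool {
        if self.responses.len() < self.t {
            self.responses.entry(id).or_insert(value);
        }
        self.responses.len() >= self.t
    }
}

fn main() {
    let mut gate = ThresholdGate::new(3);
    // Signers 1, 3 and 5 respond; 2 and 4 are offline, which is fine.
    assert!(!gate.submit(1, "comm1"));
    assert!(!gate.submit(3, "comm3"));
    assert!(gate.submit(5, "comm5"));
    // A late response from signer 2 is ignored once the session is full.
    assert!(gate.submit(2, "comm2"));
    assert_eq!(gate.responses.len(), 3);
    println!("session ready with {} commitments", gate.responses.len());
}
```

Full ROAST additionally restarts a fresh session with replacement signers when a chosen participant submits an invalid share; this sketch covers only the happy-path collection.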
#[tokio::main]
#[cfg(feature = "nostr")]
#[allow(unused_imports)]
async fn main() {
    use get_file_hash_core::repository_announcement;
    use get_file_hash_core::get_file_hash;
    use nostr_sdk::Keys;
    use sha2::{Digest, Sha256};
    use nostr_sdk::EventId;
    use std::str::FromStr;

    let keys = Keys::generate();
    let relay_urls = get_file_hash_core::get_relay_urls();
    let project_name = "my-awesome-repo-example";
    let description = "A fantastic new project example.";
    let clone_url = "git@github.com:user/my-awesome-repo-example.git";

    // Dummy EventId for examples that require a build_manifest_event_id
    const DUMMY_BUILD_MANIFEST_ID_STR: &str =
        "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
    let dummy_build_manifest_id = EventId::from_str(DUMMY_BUILD_MANIFEST_ID_STR).unwrap();

    // Example 1: Without build_manifest_event_id
    println!("Publishing repository announcement without build_manifest_event_id...");
    repository_announcement!(
        &keys,
        &relay_urls,
        project_name,
        description,
        clone_url,
        "../Cargo.toml" // Use a known file in your project
    );
    println!("Repository announcement without build_manifest_event_id published.");

    // Example 2: With build_manifest_event_id
    println!("Publishing repository announcement with build_manifest_event_id...");
    repository_announcement!(
        &keys,
        &relay_urls,
        project_name,
        description,
        clone_url,
        "../Cargo.toml", // Use a known file in your project
        Some(&dummy_build_manifest_id)
    );
    println!("Repository announcement with build_manifest_event_id published.");
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example repository_announcement --features nostr");
}
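Under NIP-34, a Kind 30617 repository announcement is an addressable event whose tags carry the repository metadata, with the `d` tag serving as the repo identifier. A hand-built sketch of such a tag list as plain data — the tag names follow the NIP-34 spec, and it is an assumption (not confirmed by this crate's source) that `repository_announcement!` assembles exactly these:

```rust
/// Sketch: assemble NIP-34 repository-announcement tags as plain data.
/// Tag names follow NIP-34; the exact tags the macro emits may differ.
fn announcement_tags(d_tag: &str, description: &str, clone_url: &str) -> Vec<Vec<String>> {
    vec![
        // "d" makes the event addressable (kind 30617 is parameterized replaceable)
        vec!["d".into(), d_tag.into()],
        vec!["description".into(), description.into()],
        vec!["clone".into(), clone_url.into()],
    ]
}

fn main() {
    let tags = announcement_tags(
        "my-awesome-repo-example",
        "A fantastic new project example.",
        "git@github.com:user/my-awesome-repo-example.git",
    );
    assert_eq!(tags[0], vec!["d", "my-awesome-repo-example"]);
    println!("{} tags", tags.len());
}
```

Because the event is addressable, republishing with the same `d` tag replaces the previous announcement on relays rather than accumulating duplicates.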
#[tokio::main]
#[cfg(feature = "nostr")]
async fn main() {
    use get_file_hash_core::publish_repository_state;
    use nostr_sdk::Keys;

    let keys = Keys::generate();
    let relay_urls = get_file_hash_core::get_relay_urls();
    let d_tag = "my-awesome-repo-example";
    let branch_name = "main";
    let commit_id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0";

    println!("Publishing repository state...");
    publish_repository_state!(&keys, &relay_urls, d_tag, branch_name, commit_id);
    println!("Repository state published.");
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example publish_repository_state --features nostr");
}
#[tokio::main]
#[cfg(feature = "nostr")]
async fn main() {
    use get_file_hash_core::publish_pull_request;
    use nostr_sdk::Keys;
    use nostr_sdk::EventId;
    use std::str::FromStr;

    let keys = Keys::generate();
    let relay_urls = get_file_hash_core::get_relay_urls();
    let d_tag = "my-awesome-repo-example";
    let commit_id = "0123456789abcdef0123456789abcdef01234567";
    let clone_url = "git@github.com:user/my-feature-branch.git";
    let title = Some("Feat: Add new awesome feature example");

    // Dummy EventId for examples that require a build_manifest_event_id
    const DUMMY_BUILD_MANIFEST_ID_STR: &str =
        "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
    let dummy_build_manifest_id = EventId::from_str(DUMMY_BUILD_MANIFEST_ID_STR).unwrap();

    // Example 1: Without title and build_manifest_event_id
    println!("Publishing pull request without title and build_manifest_event_id...");
    publish_pull_request!(&keys, &relay_urls, d_tag, commit_id, clone_url);
    println!("Pull request without title and build_manifest_event_id published.");

    // Example 2: With title but without build_manifest_event_id
    println!("Publishing pull request with title but without build_manifest_event_id...");
    publish_pull_request!(&keys, &relay_urls, d_tag, commit_id, clone_url, title);
    println!("Pull request with title but without build_manifest_event_id published.");

    // Example 3: With build_manifest_event_id but without title
    println!("Publishing pull request with build_manifest_event_id but without title...");
    publish_pull_request!(
        &keys,
        &relay_urls,
        d_tag,
        commit_id,
        clone_url,
        None, // Explicitly pass None for title
        Some(&dummy_build_manifest_id)
    );
    println!("Pull request with build_manifest_event_id but without title published.");

    // Example 4: With title and build_manifest_event_id
    println!("Publishing pull request with title and build_manifest_event_id...");
    publish_pull_request!(
        &keys,
        &relay_urls,
        d_tag,
        commit_id,
        clone_url,
        title,
        Some(&dummy_build_manifest_id)
    );
    println!("Pull request with title and build_manifest_event_id published.");
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example publish_pull_request --features nostr");
}
#[tokio::main]
#[cfg(feature = "nostr")]
async fn main() {
    use get_file_hash_core::publish_pr_update;
    use nostr_sdk::Keys;
    use nostr_sdk::EventId;
    use std::str::FromStr;

    let keys = Keys::generate();
    let relay_urls = get_file_hash_core::get_relay_urls();
    let d_tag = "my-awesome-repo-example";
    // Example PR Event ID
    let pr_event_id =
        EventId::from_str("f6e4d6a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9")
            .unwrap();
    // Commit IDs must be hexadecimal SHA-1 strings
    let updated_commit_id = "a9b8c7d6e5f4a3b2c1d0e9f8a7b6c5d4e3f2a1b0";
    let updated_clone_url = "git@github.com:user/my-feature-branch-v2.git";

    // Dummy EventId for examples that require a build_manifest_event_id
    const DUMMY_BUILD_MANIFEST_ID_STR: &str =
        "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
    let dummy_build_manifest_id = EventId::from_str(DUMMY_BUILD_MANIFEST_ID_STR).unwrap();

    // Example 1: Without build_manifest_event_id
    println!("Publishing PR update without build_manifest_event_id...");
    publish_pr_update!(
        &keys,
        &relay_urls,
        d_tag,
        &pr_event_id,
        updated_commit_id,
        updated_clone_url
    );
    println!("PR update without build_manifest_event_id published.");

    // Example 2: With build_manifest_event_id
    println!("Publishing PR update with build_manifest_event_id...");
    publish_pr_update!(
        &keys,
        &relay_urls,
        d_tag,
        &pr_event_id,
        updated_commit_id,
        updated_clone_url,
        Some(&dummy_build_manifest_id)
    );
    println!("PR update with build_manifest_event_id published.");
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example publish_pr_update --features nostr");
}
#[tokio::main]
#[cfg(feature = "nostr")]
async fn main() {
    use get_file_hash_core::publish_patch;
    use nostr_sdk::Keys;
    use nostr_sdk::EventId;
    use std::str::FromStr;

    let keys = Keys::generate();
    let relay_urls = get_file_hash_core::get_relay_urls();
    let d_tag = "my-awesome-repo-example";
    let commit_id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0"; // Example commit ID

    // Dummy EventId for examples that require a build_manifest_event_id
    const DUMMY_BUILD_MANIFEST_ID_STR: &str =
        "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
    let dummy_build_manifest_id = EventId::from_str(DUMMY_BUILD_MANIFEST_ID_STR).unwrap();

    // Example 1: Without build_manifest_event_id
    println!("Publishing patch without build_manifest_event_id...");
    publish_patch!(
        &keys,
        &relay_urls,
        d_tag,
        commit_id,
        "../Cargo.toml" // Use an existing file for the patch content
    );
    println!("Patch without build_manifest_event_id published.");

    // Example 2: With build_manifest_event_id
    println!("Publishing patch with build_manifest_event_id...");
    publish_patch!(
        &keys,
        &relay_urls,
        d_tag,
        commit_id,
        "../Cargo.toml", // Use an existing file for the patch content
        Some(&dummy_build_manifest_id)
    );
    println!("Patch with build_manifest_event_id published.");
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example publish_patch --features nostr");
}
#[tokio::main]
#[cfg(feature = "nostr")]
async fn main() {
    use get_file_hash_core::publish_issue;
    use nostr_sdk::Keys;
    use nostr_sdk::EventId;
    use std::str::FromStr;

    let keys = Keys::generate();
    let relay_urls = get_file_hash_core::get_relay_urls();
    let d_tag = "my-awesome-repo-example";
    let issue_id = "123";
    let title = "Bug: Fix authentication flow example";
    let content = "The authentication flow is currently broken when users try to log in with invalid credentials. It crashes instead of showing an error message.";

    // Dummy EventId for examples that require a build_manifest_event_id
    const DUMMY_BUILD_MANIFEST_ID_STR: &str =
        "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
    let dummy_build_manifest_id = EventId::from_str(DUMMY_BUILD_MANIFEST_ID_STR).unwrap();

    // Example 1: Without build_manifest_event_id
    println!("Publishing issue without build_manifest_event_id...");
    publish_issue!(&keys, &relay_urls, d_tag, issue_id, title, content);
    println!("Issue without build_manifest_event_id published.");

    // Example 2: With build_manifest_event_id
    println!("Publishing issue with build_manifest_event_id...");
    publish_issue!(
        &keys,
        &relay_urls,
        d_tag,
        issue_id,
        title,
        content,
        Some(&dummy_build_manifest_id)
    );
    println!("Issue with build_manifest_event_id published.");
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example publish_issue --features nostr");
}
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
    get_file_hash_core::frost_mailbox_logic::simulate_frost_mailbox_post_signer()
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_mailbox_post --features nostr");
}
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
    get_file_hash_core::frost_mailbox_logic::simulate_frost_mailbox_coordinator()
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_mailbox --features nostr");
}
#[cfg(feature = "nostr")]
use frost_secp256k1_tr as frost; // MUST use the -tr variant for BIP-340/Nostr
#[cfg(feature = "nostr")]
use rand::thread_rng;
#[cfg(feature = "nostr")]
use serde_json::json;
#[cfg(feature = "nostr")]
use sha2::{Digest, Sha256};
#[cfg(feature = "nostr")]
use std::collections::BTreeMap;
#[cfg(feature = "nostr")]
use hex;

#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = thread_rng();
    let (max_signers, min_signers) = (3, 2);

    // 1. Setup Nostr Event Metadata
    let pubkey_hex = "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"; // Example
    let created_at = 1712050000;
    let kind = 1;
    let content = "Hello from ROAST threshold signatures!";

    // 2. Serialize for Nostr ID (per NIP-01)
    let event_json = json!([0, pubkey_hex, created_at, kind, [], content]).to_string();
    let mut hasher = Sha256::new();
    hasher.update(event_json.as_bytes());
    let event_id = hasher.finalize(); // This 32-byte hash is our signing message

    // 3. FROST/ROAST Key Generation
    let (shares, pubkey_package) = frost::keys::generate_with_dealer(
        max_signers,
        min_signers,
        frost::keys::IdentifierList::Default,
        &mut rng,
    )?;

    // 4. ROAST Coordination Simulation (Round 1: Commitments)
    // In ROAST, the coordinator keeps a "session" open and collects commitments
    let mut session_commitments = BTreeMap::new();
    let mut signer_nonces = BTreeMap::new();

    // Signers 1 and 3 respond first (Signer 2 is offline/slow)
    for &id_val in &[1, 3] {
        let id = frost::Identifier::try_from(id_val as u16)?;
        let (nonces, comms) = frost::round1::commit(shares[&id].signing_share(), &mut rng);
        session_commitments.insert(id, comms);
        signer_nonces.insert(id, nonces);
    }

    // 5. Round 2: Signing the Nostr ID
    let signing_package = frost::SigningPackage::new(session_commitments, &event_id);
    let mut signature_shares = BTreeMap::new();
    for (id, nonces) in signer_nonces {
        let key_package: frost::keys::KeyPackage = shares[&id].clone().try_into()?;
        let share = frost::round2::sign(&signing_package, &nonces, &key_package)?;
        signature_shares.insert(id, share);
    }

    // 6. Aggregate into a BIP-340 Signature
    let group_signature = frost::aggregate(
        &signing_package,
        &signature_shares,
        &pubkey_package,
    )?;

    // 7. Verification (using BIP-340 logic)
    pubkey_package.verifying_key().verify(&event_id, &group_signature)?;

    println!("Nostr Event ID: {}", hex::encode(event_id));
    println!(
        "Threshold Signature (BIP-340): {}",
        hex::encode(group_signature.serialize()?)
    );
    println!("Successfully signed Nostr event using ROAST/FROST!");

    Ok(())
}

#[cfg(not(feature = "nostr"))]
fn main() {
    println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_bip_340 --features nostr");
}
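Per NIP-01, the event id is the SHA-256 of the canonical compact JSON array `[0, pubkey, created_at, kind, tags, content]`, which is why the example serializes exactly those six fields in that order before hashing. A std-only sketch of building that preimage for a tag-less event — real code must apply full JSON string escaping, which `serde_json` handles in the example above:

```rust
/// Sketch of the NIP-01 id preimage for an event with no tags.
/// Only minimal escaping is shown; use a real JSON serializer in practice.
fn nip01_preimage(pubkey_hex: &str, created_at: u64, kind: u32, content: &str) -> String {
    // Escape the two characters that would break the JSON string literal.
    let escaped: String = content
        .chars()
        .flat_map(|c| match c {
            '"' => vec!['\\', '"'],
            '\\' => vec!['\\', '\\'],
            c => vec![c],
        })
        .collect();
    format!("[0,\"{}\",{},{},[],\"{}\"]", pubkey_hex, created_at, kind, escaped)
}

fn main() {
    let preimage = nip01_preimage(
        "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798",
        1712050000,
        1,
        "Hello from ROAST threshold signatures!",
    );
    assert!(preimage.starts_with("[0,\"79be667e"));
    println!("{}", preimage);
}
```

Because every relay and client recomputes the id from this canonical form, any deviation in field order or whitespace would produce a different id and the threshold signature over it would no longer match.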
# dist plan --output-format=json > plan-dist-manifest.json

# Config for 'dist'
[workspace]
members = ["cargo:.", "cargo:src/get_file_hash_core"]

# Config for 'dist'
[dist]
# The preferred dist version to use in CI (Cargo.toml SemVer syntax)
cargo-dist-version = "0.30.3"
# CI backends to support
ci = "github"
# The installers to generate for each app
installers = ["shell", "powershell", "homebrew", "msi"]
# A GitHub repo to push Homebrew formulas to
tap = "gnostr-org/homebrew-gnostr-org"
# Path that installers should place binaries in
install-path = "CARGO_HOME"
# Publish jobs to run in CI
publish-jobs = ["homebrew"]
# Whether to install an updater program
install-updater = true
# Target platforms to build apps for (Rust target-triple syntax)
targets = ["aarch64-apple-darwin", "aarch64-unknown-linux-gnu", "x86_64-apple-darwin", "x86_64-unknown-linux-gnu", "x86_64-unknown-linux-musl", "x86_64-pc-windows-msvc"]
# Skip checking whether the specified configuration files are up to date
allow-dirty = ["ci"]