feat!: Rewrite build system and third-party dependencies (#1310)

This work was done over ~80 individual commits in the `cmake` branch,
which are now being merged back into `main`. As a roll-up commit, it is
too big to be reviewable, but each change was reviewed individually in
the context of the `cmake` branch. After this, the `cmake` branch will be
renamed `cmake-porting-history` and preserved.

---------

Co-authored-by: Geoff Jukes <geoffjukes@users.noreply.github.com>
Co-authored-by: Bartek Zdanowski <bartek.zdanowski@gmail.com>
Co-authored-by: Carlos Bentzen <cadubentzen@gmail.com>
Co-authored-by: Dennis E. Mungai <2356871+Brainiarc7@users.noreply.github.com>
Co-authored-by: Cosmin Stejerean <cstejerean@gmail.com>
Co-authored-by: Carlos Bentzen <carlos.bentzen@bitmovin.com>
Co-authored-by: Cosmin Stejerean <cstejerean@meta.com>
Co-authored-by: Cosmin Stejerean <cosmin@offbytwo.com>
Branch: pull/1311/head
Author: Joey Parrish (committed by GitHub)
Date: 2023-12-01 09:32:19 -08:00
Parent: ba5c77155a
Commit: 3e71302ba4
3136 changed files with 13183 additions and 1518975 deletions

@@ -1,39 +1,63 @@
# GitHub Actions CI
## Actions
- `custom-actions/lint-packager`:
Lints Shaka Packager. You must pass `fetch-depth: 2` to `actions/checkout`
in order to provide enough history for the linter to tell which files have
changed.
- `custom-actions/build-packager`:
Builds Shaka Packager. Leaves build artifacts in the "artifacts" folder.
Requires OS-dependent and build-dependent inputs.
- `custom-actions/test-packager`:
Tests Shaka Packager. Requires OS-dependent and build-dependent inputs.
- `custom-actions/build-docs`:
Builds Shaka Packager docs.
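
For context, a minimal sketch of how these custom actions were wired into a job. This mirrors the old PR workflow shown later in this diff; the `src` checkout path and the fetch depth follow its conventions:

```yaml
steps:
  - name: Checkout code
    uses: actions/checkout@v2
    with:
      path: src
      fetch-depth: 2  # Provides the merge base the linter diffs against.
  - name: Lint
    uses: ./src/.github/workflows/custom-actions/lint-packager
```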
## Reusable workflows
- `build.yaml`:
Build and test all combinations of OS & build settings. Also builds docs on
Linux.
## Workflows
- On PR:
- `build_and_test.yaml`:
Builds and tests all combinations of OS & build settings. Also builds
docs.
- On release tag:
- `github_release.yaml`:
Creates a draft release on GitHub, builds and tests all combinations of OS
& build settings, builds docs on all OSes, attaches static release binaries
to the draft release, then fully publishes the release.
- On release published:
- `docker_hub_release.yaml`:
Builds a Docker image to match the published GitHub release, then pushes it
to Docker Hub.
- `npm_release.yaml`:
Builds an NPM package to match the published GitHub release, then pushes it
to NPM.
- `update_docs.yaml`:
Builds updated docs and pushes them to the gh-pages branch.
- `build-docs.yaml`:
Build Packager docs. Runs only on Linux.
- `build-docker.yaml`:
Build the official Docker image.
- `lint.yaml`:
Lint Shaka Packager.
- `publish-docs.yaml`:
Publish Packager docs. Runs on the latest release.
- `publish-docker.yaml`:
Publish the official docker image. Runs on all releases.
- `publish-npm.yaml`:
Publish binaries to NPM. Runs on all releases.
- `test-linux-distros.yaml`:
Test the build on all Linux distros via docker.
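
Each of these runs on `workflow_call` and takes a `ref` input. A minimal caller sketch, mirroring how `pr.yaml` (shown later in this diff) invokes them:

```yaml
jobs:
  lint:
    name: Lint
    uses: ./.github/workflows/lint.yaml
    with:
      ref: ${{ github.ref }}
```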
## Composed workflows
- On PR (`pr.yaml`), invoke:
- `lint.yaml`
- `build.yaml`
- `build-docs.yaml`
- `build-docker.yaml`
- `test-linux-distros.yaml`
## Release workflow
- `release-please.yaml`
- Updates changelogs and version numbers based on Conventional Commits
syntax and semantic versioning
- https://conventionalcommits.org/
- https://semver.org/
- Generates/updates a PR on each push
- When the PR is merged, runs additional steps:
- Creates a GitHub release
- Invokes `publish-docs.yaml` to publish the docs
- Invokes `publish-docker.yaml` to publish the docker image
- Invokes `build.yaml`
- Attaches the binaries from `build.yaml` to the GitHub release
- Invokes `publish-npm.yaml` to publish the binaries to NPM
## Common workflows from shaka-project
- `sync-labels.yaml`
- `update-issues.yaml`
- `validate-pr-title.yaml`
## Required Repo Secrets
- `RELEASE_PLEASE_TOKEN`: A PAT for `shaka-bot` to run the `release-please`
action. If missing, the release workflow will use the default
`GITHUB_TOKEN`
- `DOCKERHUB_CI_USERNAME`: The username of the Docker Hub CI account
- `DOCKERHUB_CI_TOKEN`: An access token for Docker Hub
- To generate, visit https://hub.docker.com/settings/security
@@ -47,3 +71,12 @@
- `NPM_PACKAGE_NAME`: Not a true "secret", but stored here to avoid someone
pushing bogus packages to NPM during CI testing from a fork
- In a fork, set to a private name which differs from the production one
## Repo Settings
Each of these workflow features can be enabled by creating a "GitHub
Environment" with the same name in your repo settings. Forks will not have
these enabled by default.
- `debug`: enable debugging via SSH after a failure
- `self_hosted`: enable self-hosted runners in the build matrix
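
The `settings.yaml` workflow (later in this diff) probes for these environments at runtime and exposes them as outputs, which callers turn into booleans. A sketch of consuming them, following the pattern in `pr.yaml`:

```yaml
jobs:
  settings:
    uses: ./.github/workflows/settings.yaml
  build_and_test:
    needs: settings
    uses: ./.github/workflows/build.yaml
    with:
      ref: ${{ github.ref }}
      # An empty output means the environment (and the feature) is absent.
      self_hosted: ${{ needs.settings.outputs.self_hosted != '' }}
      debug: ${{ needs.settings.outputs.debug != '' }}
```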

.github/workflows/build-docker.yaml (new file)

@@ -0,0 +1,37 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A workflow to build the official docker image.
name: Official Docker image
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
official_docker_image:
name: Build
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ inputs.ref }}
submodules: recursive
- name: Build
shell: bash
run: docker buildx build .

.github/workflows/build-docs.yaml (new file)

@@ -0,0 +1,77 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A reusable workflow to build Packager docs. Leaves docs output in the
# "gh-pages" folder. Only runs in Linux due to the dependency on doxygen,
# which we install with apt.
name: Build Docs
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# If true, start a debug SSH server on failures.
debug:
required: false
type: boolean
default: false
jobs:
docs:
name: Build docs
runs-on: ubuntu-latest
steps:
- name: Install dependencies
run: |
sudo apt install -y doxygen
python3 -m pip install \
sphinx==7.1.2 \
sphinxcontrib.plantuml \
recommonmark \
cloud_sptheme \
breathe
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ inputs.ref }}
- name: Generate docs
run: |
mkdir -p gh-pages
mkdir -p build
# Doxygen must run before Sphinx. Sphinx will refer to
# Doxygen-generated output when it builds its own docs.
doxygen docs/Doxyfile
# Now build the Sphinx-based docs.
make -C docs/ html
# Now move the generated outputs.
cp -a build/sphinx/html gh-pages/html
cp -a build/doxygen/html gh-pages/docs
cp docs/index.html gh-pages/index.html
# Now set permissions on the generated docs.
# https://github.com/actions/upload-pages-artifact#file-permissions
chmod -R +rX gh-pages/
- name: Upload docs artifacts
uses: actions/upload-pages-artifact@v2
with:
path: gh-pages
- name: Debug
uses: mxschmitt/action-tmate@v3.6
with:
limit-access-to-actor: true
if: failure() && inputs.debug

.github/workflows/build-matrix.json (new file)

@@ -0,0 +1,38 @@
{
"comment1": "runners hosted by GitHub, always enabled",
"hosted": [
{
"os": "ubuntu-latest",
"os_name": "linux",
"target_arch": "x64",
"exe_ext": "",
"generator": "Ninja"
},
{
"os": "macos-latest",
"os_name": "osx",
"target_arch": "x64",
"exe_ext": "",
"generator": "Ninja"
},
{
"os": "windows-latest",
"os_name": "win",
"target_arch": "x64",
"exe_ext": ".exe",
"generator": ""
}
],
"comment2": "runners hosted by the owner, enabled by the 'self_hosted' environment being created on the repo",
"selfHosted": [
{
"os": "self-hosted-linux-arm64",
"os_name": "linux",
"target_arch": "arm64",
"exe_ext": "",
"generator": "Ninja",
"low_mem": "yes"
}
]
}

.github/workflows/build.yaml (new file)

@@ -0,0 +1,217 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A reusable workflow to build and test Packager on every supported OS and
# architecture.
name: Build
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# If true, start a debug SSH server on failures.
debug:
required: false
type: boolean
default: false
# If true, enable self-hosted runners in the build matrix.
self_hosted:
required: false
type: boolean
default: false
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
# Configure the build matrix based on inputs. The list of objects in the
# build matrix contents can't be changed by conditionals, but it can be
# computed by another job and deserialized. This uses inputs.self_hosted to
# determine the build matrix, based on the metadata in build-matrix.json.
build_matrix_config:
name: Matrix configuration
runs-on: ubuntu-latest
outputs:
INCLUDE: ${{ steps.configure.outputs.INCLUDE }}
OS: ${{ steps.configure.outputs.OS }}
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.ref }}
- name: Configure Build Matrix
id: configure
shell: node {0}
run: |
const enableSelfHosted = ${{ inputs.self_hosted }};
// Use enableSelfHosted to decide what the build matrix below should
// include.
const {hosted, selfHosted} = require("${{ github.workspace }}/.github/workflows/build-matrix.json");
const include = enableSelfHosted ? hosted.concat(selfHosted) : hosted;
const os = include.map((config) => config.os);
// Output JSON objects consumed by the build matrix below.
const fs = require('fs');
fs.writeFileSync(process.env.GITHUB_OUTPUT,
[
`INCLUDE=${ JSON.stringify(include) }`,
`OS=${ JSON.stringify(os) }`,
].join('\n'),
{flag: 'a'});
// Log the outputs, for the sake of debugging this script.
console.log({enableSelfHosted, include, os});
build:
needs: build_matrix_config
strategy:
fail-fast: false
matrix:
include: ${{ fromJSON(needs.build_matrix_config.outputs.INCLUDE) }}
os: ${{ fromJSON(needs.build_matrix_config.outputs.OS) }}
build_type: ["Debug", "Release"]
lib_type: ["static", "shared"]
name: ${{ matrix.os_name }} ${{ matrix.target_arch }} ${{ matrix.build_type }} ${{ matrix.lib_type }}
runs-on: ${{ matrix.os }}
steps:
- name: Configure git to preserve line endings
# Otherwise, tests fail on Windows because "golden" test outputs will not
# have the correct line endings.
run: git config --global core.autocrlf false
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ inputs.ref }}
submodules: recursive
- name: Install Linux deps
if: runner.os == 'Linux'
# NOTE: CMake is already installed in GitHub Actions VMs, but not
# necessarily in a self-hosted runner.
run: |
sudo apt update && sudo apt install -y \
cmake \
ninja-build
- name: Install Mac deps
if: runner.os == 'macOS'
# NOTE: GitHub Actions VMs on Mac do not install ninja by default.
run: |
brew install ninja
- name: Generate build files
run: |
mkdir -p build/
if [[ "${{ matrix.lib_type }}" == "shared" ]]; then
BUILD_SHARED_LIBS="ON"
else
BUILD_SHARED_LIBS="OFF"
fi
# If set, override the default generator for the platform.
# Not every entry in the build matrix config defines this.
# If this is blank, CMake will choose the default generator.
export CMAKE_GENERATOR="${{ matrix.generator }}"
# If set, configure the build to restrict parallel operations.
# This helps us avoid the compiler failing due to a lack of RAM
# on our arm64 build devices (4GB RAM shared among 6 CPUs).
if [[ "${{ matrix.low_mem }}" != "" ]]; then
export PACKAGER_LOW_MEMORY_BUILD=yes
fi
cmake \
-DCMAKE_BUILD_TYPE="${{ matrix.build_type }}" \
-DBUILD_SHARED_LIBS="$BUILD_SHARED_LIBS" \
-S . \
-B build/
- name: Build
# This is a universal build command, which will call make on Linux and
# Visual Studio on Windows. Note that the VS generator is what cmake
# calls a "multi-configuration" generator, and so the desired build
# type must be specified for Windows.
run: cmake --build build/ --config "${{ matrix.build_type }}" --parallel
- name: Test
run: ctest -C "${{ matrix.build_type }}" -V --test-dir build/
- name: Publish Test Report
uses: mikepenz/action-junit-report@150e2f992e4fad1379da2056d1d1c279f520e058
if: ${{ always() }}
with:
report_paths: 'junit-reports/TEST-*.xml'
- name: Prepare artifacts (static release only)
run: |
BUILD_CONFIG="${{ matrix.build_type }}-${{ matrix.lib_type }}"
if [[ "$BUILD_CONFIG" != "Release-static" ]]; then
echo "Skipping artifacts for $BUILD_CONFIG."
exit 0
fi
# TODO: Check static executables?
echo "::group::Prepare artifacts folder"
mkdir artifacts
ARTIFACTS="$GITHUB_WORKSPACE/artifacts"
if [[ "${{ runner.os }}" == "Windows" ]]; then
cd build/packager/Release
else
cd build/packager
fi
echo "::endgroup::"
echo "::group::Strip executables"
strip packager${{ matrix.exe_ext }}
strip mpd_generator${{ matrix.exe_ext }}
echo "::endgroup::"
SUFFIX="-${{ matrix.os_name }}-${{ matrix.target_arch }}"
EXE_SUFFIX="$SUFFIX${{ matrix.exe_ext}}"
echo "::group::Copy packager"
cp packager${{ matrix.exe_ext }} $ARTIFACTS/packager$EXE_SUFFIX
echo "::endgroup::"
echo "::group::Copy mpd_generator"
cp mpd_generator${{ matrix.exe_ext }} $ARTIFACTS/mpd_generator$EXE_SUFFIX
echo "::endgroup::"
# The pssh-box bundle is OS and architecture independent. So only do
# it on this one OS and architecture, and give it a more generic
# filename.
if [[ '${{ matrix.os_name }}' == 'linux' && '${{ matrix.target_arch }}' == 'x64' ]]; then
echo "::group::Tar pssh-box"
tar -czf $ARTIFACTS/pssh-box.py.tar.gz pssh-box.py pssh-box-protos
echo "::endgroup::"
fi
- name: Upload static release build artifacts
uses: actions/upload-artifact@v3
if: matrix.build_type == 'Release' && matrix.lib_type == 'static'
with:
name: artifacts-${{ matrix.os_name }}-${{ matrix.target_arch }}
path: artifacts/*
if-no-files-found: error
retention-days: 5
- name: Debug
uses: mxschmitt/action-tmate@v3.6
with:
limit-access-to-actor: true
if: failure() && inputs.debug

.github/workflows/build_and_test.yaml (deleted)

@@ -1,145 +0,0 @@
name: Build and Test PR
# Builds and tests on all combinations of OS, build type, and library type.
# Also builds the docs.
#
# Runs when a pull request is opened or updated.
#
# Can also be run manually for debugging purposes.
on:
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
inputs:
ref:
description: "The ref to build and test."
required: False
# If another instance of this workflow is started for the same PR, cancel the
# old one. If a PR is updated and a new test run is started, the old test run
# will be cancelled automatically to conserve resources.
concurrency:
group: ${{ github.workflow }}-${{ github.event.inputs.ref || github.ref }}
cancel-in-progress: true
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
# This makes the merge base available for the C++ linter, so that it
# can tell which files have changed.
fetch-depth: 2
- name: Lint
uses: ./src/.github/workflows/custom-actions/lint-packager
build_and_test:
# Doesn't really "need" it, but let's not waste time on an expensive matrix
# build step just to cancel it because of a linter error.
needs: lint
strategy:
fail-fast: false
matrix:
# NOTE: macos-10.15 is required for now, to work around issues with our
# build system. The same is true for windows-2019. See related
# comments in
# .github/workflows/custom-actions/build-packager/action.yaml
os: ["ubuntu-latest", "macos-10.15", "windows-2019", "self-hosted-linux-arm64"]
build_type: ["Debug", "Release"]
lib_type: ["static", "shared"]
include:
- os: ubuntu-latest
os_name: linux
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: macos-10.15
os_name: osx
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: windows-2019
os_name: win
target_arch: x64
exe_ext: ".exe"
# 64-bit outputs on Windows go to a different folder name.
build_type_suffix: "_x64"
- os: self-hosted-linux-arm64
os_name: linux
target_arch: arm64
exe_ext: ""
build_type_suffix: ""
name: Build and test ${{ matrix.os_name }} ${{ matrix.target_arch }} ${{ matrix.build_type }} ${{ matrix.lib_type }}
runs-on: ${{ matrix.os }}
steps:
- name: Configure git to preserve line endings
# Otherwise, tests fail on Windows because "golden" test outputs will not
# have the correct line endings.
run: git config --global core.autocrlf false
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
- name: Build docs (Linux only)
if: runner.os == 'Linux'
uses: ./src/.github/workflows/custom-actions/build-docs
- name: Build Packager
uses: ./src/.github/workflows/custom-actions/build-packager
with:
os_name: ${{ matrix.os_name }}
target_arch: ${{ matrix.target_arch }}
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
- name: Test Packager
uses: ./src/.github/workflows/custom-actions/test-packager
with:
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
test_supported_linux_distros:
# Doesn't really "need" it, but let's not waste time on a series of docker
# builds just to cancel it because of a linter error.
needs: lint
name: Test builds on all supported Linux distros (using docker)
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
- name: Install depot tools
shell: bash
run: |
git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
touch depot_tools/.disable_auto_update
echo "${GITHUB_WORKSPACE}/depot_tools" >> $GITHUB_PATH
- name: Setup gclient
shell: bash
run: |
gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
# NOTE: the docker tests will do gclient runhooks, so skip hooks here.
gclient sync --nohooks
- name: Test all distros
shell: bash
run: ./src/packager/testing/dockers/test_dockers.sh

.github/workflows/custom-actions/build-docs/action.yaml (deleted)

@@ -1,45 +0,0 @@
name: Build Shaka Packager Docs
description: |
A reusable action to build Shaka Packager docs.
Leaves docs output in the "gh-pages" folder.
Only runs in Linux due to the dependency on doxygen, which we install with
apt.
runs:
using: composite
steps:
- name: Install dependencies
shell: bash
run: |
echo "::group::Install dependencies"
sudo apt install -y doxygen
python3 -m pip install \
sphinxcontrib.plantuml \
recommonmark \
breathe
echo "::endgroup::"
- name: Generate docs
shell: bash
run: |
echo "::group::Prepare output folders"
mkdir -p gh-pages
cd src
mkdir -p out
echo "::endgroup::"
echo "::group::Build Doxygen docs"
# Doxygen must run before Sphinx. Sphinx will refer to
# Doxygen-generated output when it builds its own docs.
doxygen docs/Doxyfile
echo "::endgroup::"
echo "::group::Build Sphinx docs"
# Now build the Sphinx-based docs.
make -C docs/ html
echo "::endgroup::"
echo "::group::Move ouputs"
# Now move the generated outputs.
cp -a out/sphinx/html ../gh-pages/html
cp -a out/doxygen/html ../gh-pages/docs
cp docs/index.html ../gh-pages/index.html
echo "::endgroup::"

.github/workflows/custom-actions/build-packager/action.yaml (deleted)

@@ -1,184 +0,0 @@
name: Build Shaka Packager
description: |
A reusable action to build Shaka Packager.
Leaves build artifacts in the "artifacts" folder.
inputs:
os_name:
description: The name of the OS (one word). Appended to artifact filenames.
required: true
target_arch:
description: The CPU architecture to target. We support x64, arm64.
required: true
lib_type:
description: A library type, either "static" or "shared".
required: true
build_type:
description: A build type, either "Debug" or "Release".
required: true
build_type_suffix:
description: A suffix to append to the build type in the output path.
required: false
default: ""
exe_ext:
description: The extension on executable files.
required: false
default: ""
runs:
using: composite
steps:
- name: Select Xcode 10.3 and SDK 10.14 (macOS only)
# NOTE: macOS 11 doesn't work with our (old) version of Chromium build,
# and the latest Chromium build doesn't work with Packager's build
# system. To work around this, we need an older SDK version, and to
# get that, we need an older XCode version. XCode 10.3 has SDK 10.14,
# which works.
shell: bash
run: |
if [[ "${{ runner.os }}" == "macOS" ]]; then
echo "::group::Select Xcode 10.3"
sudo xcode-select -s /Applications/Xcode_10.3.app/Contents/Developer
echo "::endgroup::"
fi
- name: Install c-ares (Linux only)
shell: bash
run: |
if [[ "${{ runner.os }}" == "Linux" ]]; then
echo "::group::Install c-ares"
sudo apt install -y libc-ares-dev
echo "::endgroup::"
fi
- name: Force Python 2 to support ancient build system (non-Linux only)
if: runner.os != 'Linux'
uses: actions/setup-python@v2
with:
python-version: '2.x'
- name: Force Python 2 to support ancient build system (Linux only)
if: runner.os == 'Linux'
shell: bash
run: |
echo "::group::Install python2"
sudo apt install -y python2
sudo ln -sf python2 /usr/bin/python
echo "::endgroup::"
- name: Install depot tools
shell: bash
run: |
echo "::group::Install depot_tools"
git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
touch depot_tools/.disable_auto_update
echo "${GITHUB_WORKSPACE}/depot_tools" >> $GITHUB_PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
echo "VPYTHON_BYPASS=manually managed python not supported by chrome operations" >> $GITHUB_ENV
# Work around an issue in depot_tools on Windows, where an unexpected exception type appears.
sed -e 's/except subprocess.CalledProcessError:/except:/' -i.bk depot_tools/git_cache.py
echo "::endgroup::"
- name: Build ninja (arm only)
shell: bash
run: |
# NOTE: There is no prebuilt copy of ninja for the "aarch64"
# architecture (as reported by "uname -p" on arm64). So we must build
# our own, as recommended by depot_tools when it fails to fetch a
# prebuilt copy for us.
# NOTE 2: It turns out that $GITHUB_PATH operates like a stack.
# Appending to that file places the new path at the beginning of $PATH
# for the next step, so this step must come _after_ installing
# depot_tools.
if [[ "${{ inputs.target_arch }}" == "arm64" ]]; then
echo "::group::Build ninja (arm-only)"
git clone https://github.com/ninja-build/ninja.git -b v1.8.2
# The --bootstrap option compiles ninja as well as configures it.
# This is the exact command prescribed by depot_tools when it fails to
# fetch a ninja binary for your platform.
(cd ninja && ./configure.py --bootstrap)
echo "${GITHUB_WORKSPACE}/ninja" >> $GITHUB_PATH
echo "::endgroup::"
fi
- name: Configure gclient
shell: bash
run: |
echo "::group::Configure gclient"
gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
echo "::endgroup::"
- name: Sync gclient
env:
GYP_DEFINES: "target_arch=${{ inputs.target_arch }} libpackager_type=${{ inputs.lib_type }}_library"
GYP_MSVS_VERSION: "2019"
GYP_MSVS_OVERRIDE_PATH: "C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise"
shell: bash
run: |
echo "::group::Sync gclient"
BUILD_CONFIG="${{ inputs.build_type }}-${{ inputs.lib_type }}"
if [[ "$BUILD_CONFIG" == "Release-static" && "${{ runner.os }}" == "Linux" ]]; then
# For static release builds, set these two additional flags for fully static binaries.
export GYP_DEFINES="$GYP_DEFINES disable_fatal_linker_warnings=1 static_link_binaries=1"
fi
gclient sync
echo "::endgroup::"
- name: Build
shell: bash
run: |
echo "::group::Build"
ninja -C src/out/${{ inputs.build_type }}${{ inputs.build_type_suffix }}
echo "::endgroup::"
- name: Prepare artifacts (static release only)
shell: bash
run: |
BUILD_CONFIG="${{ inputs.build_type }}-${{ inputs.lib_type }}"
if [[ "$BUILD_CONFIG" != "Release-static" ]]; then
echo "Skipping artifacts for $BUILD_CONFIG."
exit 0
fi
if [[ "${{ runner.os }}" == "Linux" ]]; then
echo "::group::Check for static executables"
(
cd src/out/Release${{ inputs.build_type_suffix }}
# Prove that we built static executables on Linux. First, check that
# the executables exist, and fail if they do not. Then check "ldd",
# which will fail if the executable is not dynamically linked. If
# "ldd" succeeds, we fail the workflow. Finally, we call "true" so
# that the last executed statement will be a success, and the step
# won't be failed if we get that far.
ls packager mpd_generator >/dev/null || exit 1
ldd packager 2>&1 && exit 1
ldd mpd_generator 2>&1 && exit 1
true
)
echo "::endgroup::"
fi
echo "::group::Prepare artifacts folder"
mkdir artifacts
ARTIFACTS="$GITHUB_WORKSPACE/artifacts"
cd src/out/Release${{ inputs.build_type_suffix }}
echo "::endgroup::"
echo "::group::Strip executables"
strip packager${{ inputs.exe_ext }}
strip mpd_generator${{ inputs.exe_ext }}
echo "::endgroup::"
SUFFIX="-${{ inputs.os_name }}-${{ inputs.target_arch }}"
EXE_SUFFIX="$SUFFIX${{ inputs.exe_ext}}"
echo "::group::Copy packager"
cp packager${{ inputs.exe_ext }} $ARTIFACTS/packager$EXE_SUFFIX
echo "::endgroup::"
echo "::group::Copy mpd_generator"
cp mpd_generator${{ inputs.exe_ext }} $ARTIFACTS/mpd_generator$EXE_SUFFIX
echo "::endgroup::"
# The pssh-box bundle is OS and architecture independent. So only do
# it on this one OS and architecture, and give it a more generic
# filename.
if [[ '${{ inputs.os_name }}' == 'linux' && '${{ inputs.target_arch }}' == 'x64' ]]; then
echo "::group::Tar pssh-box"
tar -czf $ARTIFACTS/pssh-box.py.tar.gz pyproto pssh-box.py
echo "::endgroup::"
fi

.github/workflows/custom-actions/lint-packager/action.yaml (deleted)

@@ -1,36 +0,0 @@
name: Lint Shaka Packager
description: |
A reusable action to lint Shaka Packager source.
When checking out source, you must use 'fetch-depth: 2' in actions/checkout,
or else the linter won't have another revision to compare to.
runs:
using: composite
steps:
- name: Lint
shell: bash
run: |
cd src/
echo "::group::Installing git-clang-format"
wget https://raw.githubusercontent.com/llvm-mirror/clang/master/tools/clang-format/git-clang-format
sudo install -m 755 git-clang-format /usr/local/bin/git-clang-format
rm git-clang-format
echo "::endgroup::"
echo "::group::Installing pylint"
python3 -m pip install --upgrade pylint==2.8.3
echo "::endgroup::"
echo "::group::Check clang-format for C++ sources"
# NOTE: --binary forces use of global clang-format (which works) instead
# of depot_tools clang-format (which doesn't).
# NOTE: Must use base.sha instead of base.ref, since we don't have
# access to the branch name that base.ref would give us.
# NOTE: Must also use fetch-depth: 2 in actions/checkout to have access
# to the base ref for comparison.
packager/tools/git/check_formatting.py \
--binary /usr/bin/clang-format \
${{ github.event.pull_request.base.sha || 'HEAD^' }}
echo "::endgroup::"
echo "::group::Check pylint for Python sources"
packager/tools/git/check_pylint.py
echo "::endgroup::"

.github/workflows/custom-actions/test-packager/action.yaml (deleted)

@@ -1,45 +0,0 @@
name: Test Shaka Packager
description: |
A reusable action to test Shaka Packager.
Should be run after building Shaka Packager.
inputs:
lib_type:
description: A library type, either "static" or "shared".
required: true
build_type:
description: A build type, either "Debug" or "Release".
required: true
build_type_suffix:
description: A suffix to append to the build type in the output path.
required: false
default: ""
exe_ext:
description: The extension on executable files.
required: false
default: ""
runs:
using: composite
steps:
- name: Test
shell: bash
run: |
echo "::group::Prepare test environment"
# NOTE: Some of these tests must be run from the "src" directory.
cd src/
OUTDIR=out/${{ inputs.build_type }}${{ inputs.build_type_suffix }}
if [[ '${{ runner.os }}' == 'macOS' ]]; then
export DYLD_FALLBACK_LIBRARY_PATH=$OUTDIR
fi
echo "::endgroup::"
for i in $OUTDIR/*test${{ inputs.exe_ext }}; do
echo "::group::Test $i"
"$i" || exit 1
echo "::endgroup::"
done
echo "::group::Test $OUTDIR/packager_test.py"
python3 $OUTDIR/packager_test.py \
-v --libpackager_type=${{ inputs.lib_type }}_library
echo "::endgroup::"

.github/workflows/docker_hub_release.yaml (deleted)

@@ -1,47 +0,0 @@
name: Docker Hub Release
# Runs when a new release is published on GitHub.
# Creates a corresponding Docker Hub release and publishes it.
#
# Can also be run manually for debugging purposes.
on:
release:
types: [published]
# For manual debugging:
workflow_dispatch:
inputs:
ref:
description: "The tag to release to Docker Hub."
required: True
jobs:
publish_docker_hub:
name: Publish to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Compute ref
id: ref
# We could be building from a workflow dispatch (manual run), or a
# release event. Subsequent steps can refer to $TARGET_REF to
# determine the correct ref in all cases.
run: |
echo "TARGET_REF=${{ github.event.inputs.ref || github.event.release.tag_name }}" >> $GITHUB_ENV
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ env.TARGET_REF }}
- name: Log in to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_CI_USERNAME }}
password: ${{ secrets.DOCKERHUB_CI_TOKEN }}
- name: Push to Docker Hub
uses: docker/build-push-action@v2
with:
push: true
context: src/
tags: ${{ secrets.DOCKERHUB_PACKAGE_NAME }}:latest,${{ secrets.DOCKERHUB_PACKAGE_NAME }}:${{ env.TARGET_REF }}

.github/workflows/github_release.yaml (deleted)

@@ -1,228 +0,0 @@
name: GitHub Release
# Runs when a new tag is created that looks like a version number.
#
# 1. Creates a draft release on GitHub with the latest release notes
# 2. On all combinations of OS, build type, and library type:
# a. builds Packager
# b. builds the docs
# c. runs all tests
# d. attaches build artifacts to the release
# 3. Fully publishes the release on GitHub
#
# Publishing the release then triggers additional workflows for NPM, Docker
# Hub, and GitHub Pages.
#
# Can also be run manually for debugging purposes.
on:
push:
tags:
- "v*.*"
# For manual debugging:
workflow_dispatch:
inputs:
tag:
description: "An existing tag to release."
required: True
jobs:
setup:
name: Setup
runs-on: ubuntu-latest
outputs:
tag: ${{ steps.compute_tag.outputs.tag }}
steps:
- name: Compute tag
id: compute_tag
# We could be building from a workflow dispatch (manual run)
# or from a pushed tag. If triggered from a pushed tag, we would like
# to strip refs/tags/ off of the incoming ref and just use the tag
# name. Subsequent jobs can refer to the "tag" output of this job to
# determine the correct tag name in all cases.
run: |
# Strip refs/tags/ from the input to get the tag name, then store
# that in output.
echo "::set-output name=tag::${{ github.event.inputs.tag || github.ref }}" \
| sed -e 's@refs/tags/@@'
draft_release:
name: Create GitHub release
needs: setup
runs-on: ubuntu-latest
outputs:
release_id: ${{ steps.draft_release.outputs.id }}
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ needs.setup.outputs.tag }}
- name: Check changelog version
# This check prevents releases without appropriate changelog updates.
run: |
cd src
VERSION=$(packager/tools/extract_from_changelog.py --version)
if [[ "$VERSION" != "${{ needs.setup.outputs.tag }}" ]]; then
echo ""
echo ""
echo "***** ***** *****"
echo ""
echo "Version mismatch!"
echo "Workflow is targetting ${{ needs.setup.outputs.tag }},"
echo "but CHANGELOG.md contains $VERSION!"
exit 1
fi
- name: Extract release notes
run: |
cd src
packager/tools/extract_from_changelog.py --release_notes \
| tee ../RELEASE_NOTES.md
- name: Draft release
id: draft_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ needs.setup.outputs.tag }}
release_name: ${{ needs.setup.outputs.tag }}
body_path: RELEASE_NOTES.md
draft: true
lint:
needs: setup
name: Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ needs.setup.outputs.tag }}
# This makes the merge base available for the C++ linter, so that it
# can tell which files have changed.
fetch-depth: 2
- name: Lint
uses: ./src/.github/workflows/custom-actions/lint-packager
build_and_test:
needs: [setup, lint, draft_release]
strategy:
matrix:
os: ["ubuntu-latest", "macos-latest", "windows-2019", "self-hosted-linux-arm64"]
build_type: ["Debug", "Release"]
lib_type: ["static", "shared"]
include:
- os: ubuntu-latest
os_name: linux
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: macos-latest
os_name: osx
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: windows-2019
os_name: win
target_arch: x64
exe_ext: ".exe"
# 64-bit outputs on Windows go to a different folder name.
build_type_suffix: "_x64"
- os: self-hosted-linux-arm64
os_name: linux
target_arch: arm64
exe_ext: ""
build_type_suffix: ""
name: Build and test ${{ matrix.os_name }} ${{ matrix.target_arch }} ${{ matrix.build_type }} ${{ matrix.lib_type }}
runs-on: ${{ matrix.os }}
steps:
- name: Configure git to preserve line endings
# Otherwise, tests fail on Windows because "golden" test outputs will not
# have the correct line endings.
run: git config --global core.autocrlf false
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ needs.setup.outputs.tag }}
- name: Build docs (Linux only)
if: runner.os == 'Linux'
uses: ./src/.github/workflows/custom-actions/build-docs
- name: Build Packager
uses: ./src/.github/workflows/custom-actions/build-packager
with:
os_name: ${{ matrix.os_name }}
target_arch: ${{ matrix.target_arch }}
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
- name: Test Packager
uses: ./src/.github/workflows/custom-actions/test-packager
with:
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
- name: Attach artifacts to release
if: matrix.build_type == 'Release' && matrix.lib_type == 'static'
uses: dwenegar/upload-release-assets@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
release_id: ${{ needs.draft_release.outputs.release_id }}
assets_path: artifacts
test_supported_linux_distros:
# Doesn't really "need" it, but let's not waste time on a series of docker
# builds just to cancel it because of a linter error.
needs: lint
name: Test builds on all supported Linux distros (using docker)
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
- name: Install depot tools
shell: bash
run: |
git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
touch depot_tools/.disable_auto_update
echo "${GITHUB_WORKSPACE}/depot_tools" >> $GITHUB_PATH
- name: Setup gclient
shell: bash
run: |
gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
# NOTE: the docker tests will do gclient runhooks, so skip hooks here.
gclient sync --nohooks
- name: Test all distros
shell: bash
run: ./src/packager/testing/dockers/test_dockers.sh
publish_release:
name: Publish GitHub release
needs: [draft_release, build_and_test]
runs-on: ubuntu-latest
steps:
- name: Publish release
uses: eregon/publish-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
release_id: ${{ needs.draft_release.outputs.release_id }}

.github/workflows/lint.yaml (new file)

@@ -0,0 +1,54 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A workflow to lint Shaka Packager.
name: Lint
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ inputs.ref }}
# We must use 'fetch-depth: 2', or else the linter won't have another
# revision to compare to.
fetch-depth: 2
- name: Lint
shell: bash
run: |
wget https://raw.githubusercontent.com/llvm-mirror/clang/master/tools/clang-format/git-clang-format
sudo install -m 755 git-clang-format /usr/local/bin/git-clang-format
rm git-clang-format
python3 -m pip install --upgrade pylint==2.8.3
# NOTE: Must use base.sha instead of base.ref, since we don't have
# access to the branch name that base.ref would give us.
# NOTE: Must also use fetch-depth: 2 in actions/checkout to have
# access to the base ref for comparison.
packager/tools/git/check_formatting.py \
--binary /usr/bin/clang-format \
${{ github.event.pull_request.base.sha || 'HEAD^' }}
packager/tools/git/check_pylint.py

.github/workflows/npm_release.yaml (deleted)

@@ -1,54 +0,0 @@
name: NPM Release
# Runs when a new release is published on GitHub.
# Creates a corresponding NPM release and publishes it.
#
# Can also be run manually for debugging purposes.
on:
release:
types: [published]
# For manual debugging:
workflow_dispatch:
inputs:
ref:
description: "The tag to release to NPM."
required: True
jobs:
publish_npm:
name: Publish to NPM
runs-on: ubuntu-latest
steps:
- name: Compute ref
id: ref
# We could be building from a workflow dispatch (manual run), or a
# release event. Subsequent steps can refer to $TARGET_REF to
# determine the correct ref in all cases.
run: |
echo "TARGET_REF=${{ github.event.inputs.ref || github.event.release.tag_name }}" >> $GITHUB_ENV
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ env.TARGET_REF }}
- name: Setup NodeJS
uses: actions/setup-node@v1
with:
node-version: 10
- name: Set package name and version
run: |
cd src/npm
sed package.json -i \
-e 's/"name": ""/"name": "${{ secrets.NPM_PACKAGE_NAME }}"/' \
-e 's/"version": ""/"version": "${{ env.TARGET_REF }}"/'
- name: Publish NPM package
uses: JS-DevTools/npm-publish@v1
with:
token: ${{ secrets.NPM_CI_TOKEN }}
package: src/npm/package.json
check-version: false
access: public

.github/workflows/pr.yaml (new file)

@@ -0,0 +1,74 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
name: Build and Test PR
# Builds and tests on all combinations of OS, build type, and library type.
# Also builds the docs.
#
# Runs when a pull request is opened or updated.
#
# Can also be run manually for debugging purposes.
on:
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
inputs:
ref:
description: "The ref to build and test."
required: false
type: string
# If another instance of this workflow is started for the same PR, cancel the
# old one. If a PR is updated and a new test run is started, the old test run
# will be cancelled automatically to conserve resources.
concurrency:
group: ${{ github.workflow }}-${{ inputs.ref || github.ref }}
cancel-in-progress: true
jobs:
settings:
name: Settings
uses: ./.github/workflows/settings.yaml
lint:
name: Lint
uses: ./.github/workflows/lint.yaml
with:
ref: ${{ inputs.ref || github.ref }}
build_and_test:
needs: [lint, settings]
name: Build and test
uses: ./.github/workflows/build.yaml
with:
ref: ${{ inputs.ref || github.ref }}
self_hosted: ${{ needs.settings.outputs.self_hosted != '' }}
debug: ${{ needs.settings.outputs.debug != '' }}
build_docs:
needs: [lint, settings]
name: Build docs
uses: ./.github/workflows/build-docs.yaml
with:
ref: ${{ inputs.ref || github.ref }}
debug: ${{ needs.settings.outputs.debug != '' }}
official_docker_image:
needs: lint
name: Official Docker image
uses: ./.github/workflows/build-docker.yaml
with:
ref: ${{ inputs.ref || github.ref }}
test_supported_linux_distros:
# Doesn't really "need" it, but let's not waste time on a series of docker
# builds just to cancel it because of a linter error.
needs: lint
name: Test Linux distros
uses: ./.github/workflows/test-linux-distros.yaml
with:
ref: ${{ inputs.ref || github.ref }}

.github/workflows/publish-docker.yaml (new file)

@@ -0,0 +1,68 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A workflow to publish the official docker image.
name: Publish to Docker Hub
# Runs when called from another workflow.
# Can also be run manually for debugging purposes.
on:
workflow_call:
inputs:
tag:
required: true
type: string
latest:
required: true
type: boolean
secrets:
DOCKERHUB_CI_USERNAME:
required: true
DOCKERHUB_CI_TOKEN:
required: true
DOCKERHUB_PACKAGE_NAME:
required: true
# For manual debugging:
workflow_dispatch:
inputs:
tag:
description: The tag to build from and to push to.
required: true
type: string
latest:
description: If true, push to the "latest" tag.
required: true
type: boolean
jobs:
publish_docker_hub:
name: Publish to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ inputs.tag }}
submodules: recursive
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_CI_USERNAME }}
password: ${{ secrets.DOCKERHUB_CI_TOKEN }}
- name: Push to Docker Hub
uses: docker/build-push-action@v5
with:
push: true
tags: ${{ secrets.DOCKERHUB_PACKAGE_NAME }}:${{ inputs.tag }}
- name: Push to Docker Hub as "latest"
if: ${{ inputs.latest }}
uses: docker/build-push-action@v5
with:
push: true
tags: ${{ secrets.DOCKERHUB_PACKAGE_NAME }}:latest

.github/workflows/publish-docs.yaml (new file)

@@ -0,0 +1,51 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A workflow to publish the docs to GitHub Pages.
name: Publish Docs
# Runs when called from another workflow.
# Can also be run manually for debugging purposes.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# For manual debugging:
workflow_dispatch:
inputs:
ref:
description: "The ref to build docs from."
required: true
type: string
jobs:
build_docs:
name: Build docs
uses: ./.github/workflows/build-docs.yaml
with:
ref: ${{ inputs.ref }}
publish_docs:
name: Publish updated docs
needs: build_docs
runs-on: ubuntu-latest
# Grant GITHUB_TOKEN the permissions required to deploy to Pages
permissions:
pages: write
id-token: write
# Deploy to the github-pages environment
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2

.github/workflows/publish-npm.yaml (new file)

@@ -0,0 +1,100 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A workflow to publish the official NPM package.
name: Publish to NPM
# Runs when called from another workflow.
# Can also be run manually for debugging purposes.
on:
workflow_call:
inputs:
tag:
required: true
type: string
latest:
required: true
type: boolean
secrets:
NPM_CI_TOKEN:
required: true
NPM_PACKAGE_NAME:
required: true
# For manual debugging:
workflow_dispatch:
inputs:
tag:
description: The tag to build from.
required: true
type: string
latest:
description: If true, push to the "latest" tag.
required: true
type: boolean
jobs:
publish:
name: Publish
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
ref: ${{ inputs.tag }}
submodules: recursive
- uses: actions/setup-node@v4
with:
node-version: 16
registry-url: 'https://registry.npmjs.org'
- name: Compute tags
run: |
# NPM publish always sets a tag. If you don't provide an explicit
# tag, you get the "latest" tag by default, but we want "latest" to
# always point to the highest version number. So we set an explicit
# tag on every "publish" command, then follow up with a command to
# set "latest" only if this release was the highest version yet.
# The explicit tag is based on the branch. If the git tag is v4.4.1,
# the branch was v4.4.x, and the explicit NPM tag should be
# v4.4-latest.
GIT_TAG_NAME=${{ inputs.tag }}
NPM_TAG=$(echo "$GIT_TAG_NAME" | cut -f 1-2 -d .)-latest
echo "NPM_TAG=$NPM_TAG" >> $GITHUB_ENV
# Since we also set the package version on-the-fly during publication,
# compute that here. It's the tag without the leading "v".
NPM_VERSION=$(echo "$GIT_TAG_NAME" | sed -e 's/^v//')
echo "NPM_VERSION=$NPM_VERSION" >> $GITHUB_ENV
# Debug the decisions made here.
echo "This release: $GIT_TAG_NAME"
echo "NPM tag: $NPM_TAG"
echo "NPM version: $NPM_VERSION"
- name: Set package name and version
run: |
# These substitutions use | because the package name could contain
# both / and @, but | should not appear in package names.
sed npm/package.json -i \
-e 's|"name": ""|"name": "${{ secrets.NPM_PACKAGE_NAME }}"|' \
-e 's|"version": ""|"version": "${{ env.NPM_VERSION }}"|'
- name: Publish
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_CI_TOKEN }}
run: |
cd npm
# Publish with an explicit tag.
# Also publish with explicit public access, to allow scoped packages.
npm publish --tag "$NPM_TAG" --access=public
# Add the "latest" tag if needed.
if [[ "${{ inputs.latest }}" == "true" ]]; then
npm dist-tag add "${{ secrets.NPM_PACKAGE_NAME }}@$NPM_VERSION" latest
fi

.github/workflows/release-please.yaml (new file)

@@ -0,0 +1,146 @@
# Copyright 2023 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
name: Release
on:
push:
branches:
- main
- v[0-9]*
jobs:
release:
runs-on: ubuntu-latest
outputs:
release_created: ${{ steps.release.outputs.release_created }}
tag_name: ${{ steps.release.outputs.tag_name }}
steps:
# Create/update release PR
- uses: google-github-actions/release-please-action@v3
id: release
with:
# Required input to specify the release type. This is not really a
# go project, but go projects in release-please only update
# CHANGELOG.md and nothing else. This is what we want.
release-type: go
# Make sure we create the PR against the correct branch.
default-branch: ${{ github.ref_name }}
# Use a special shaka-bot access token for releases.
token: ${{ secrets.RELEASE_PLEASE_TOKEN || secrets.GITHUB_TOKEN }}
# Temporary settings to bootstrap v3.0.0.
last-release-sha: 634af6591ce8c701587a78042ae7f81761725710
bootstrap-sha: 634af6591ce8c701587a78042ae7f81761725710
# The jobs below are all conditional on a release having been created by
# someone merging the release PR.
# Several actions either only run on the latest release or run with different
# options on the latest release. Here we compute if this is the highest
# version number (what we are calling "latest" for NPM, Docker, and the
# docs). You can have a more recent release from an older branch, but this
# would not qualify as "latest" here.
compute:
name: Compute latest release flag
runs-on: ubuntu-latest
needs: release
if: needs.release.outputs.release_created
outputs:
latest: ${{ steps.compute.outputs.latest }}
steps:
- uses: actions/checkout@v3
with:
fetch-tags: true
persist-credentials: false
- name: Compute latest
id: compute
run: |
GIT_TAG_NAME=${{ needs.release.outputs.tag_name }}
RELEASE_TAGS=$(git tag | grep ^v[0-9])
LATEST_RELEASE=$(echo "$RELEASE_TAGS" | sort --version-sort | tail -1)
if [[ "$GIT_TAG_NAME" == "$LATEST_RELEASE" ]]; then
LATEST=true
else
LATEST=false
fi
echo latest=$LATEST >> $GITHUB_OUTPUT
# Debug the decisions made here.
echo "This release: $GIT_TAG_NAME"
echo "Latest release: $LATEST_RELEASE"
echo "This release is latest: $LATEST"
# Publish docs to GitHub Pages
docs:
name: Update docs
needs: [release, compute]
# Only if this is the latest release
if: needs.release.outputs.release_created && needs.compute.outputs.latest
uses: ./.github/workflows/publish-docs.yaml
with:
ref: ${{ github.ref }}
# Publish official docker image
docker:
name: Update docker image
needs: [release, compute]
if: needs.release.outputs.release_created
uses: ./.github/workflows/publish-docker.yaml
with:
tag: ${{ needs.release.outputs.tag_name }}
latest: ${{ needs.compute.outputs.latest == 'true' }}
secrets:
DOCKERHUB_CI_USERNAME: ${{ secrets.DOCKERHUB_CI_USERNAME }}
DOCKERHUB_CI_TOKEN: ${{ secrets.DOCKERHUB_CI_TOKEN }}
DOCKERHUB_PACKAGE_NAME: ${{ secrets.DOCKERHUB_PACKAGE_NAME }}
# Do a complete build
build:
name: Build
needs: release
if: needs.release.outputs.release_created
uses: ./.github/workflows/build.yaml
with:
ref: ${{ github.ref }}
# Attach build artifacts to the release
artifacts:
name: Artifacts
runs-on: ubuntu-latest
needs: [release, build]
if: needs.release.outputs.release_created
steps:
- uses: actions/download-artifact@v3
with:
path: artifacts
- name: Debug
run: find -ls
- name: Attach packager to release
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
tag: ${{ needs.release.outputs.tag_name }}
make_latest: false # Already set for the release
file_glob: true
file: artifacts/artifacts*/*
overwrite: true
# Surprisingly, Shaka Packager binaries can be installed via npm.
# Publish NPM package updates.
npm:
name: Update NPM
needs: [release, compute, artifacts]
if: needs.release.outputs.release_created
uses: ./.github/workflows/publish-npm.yaml
with:
tag: ${{ needs.release.outputs.tag_name }}
latest: ${{ needs.compute.outputs.latest == 'true' }}
secrets:
NPM_CI_TOKEN: ${{ secrets.NPM_CI_TOKEN }}
NPM_PACKAGE_NAME: ${{ secrets.NPM_PACKAGE_NAME }}

.github/workflows/settings.yaml (new file)

@@ -0,0 +1,46 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A reusable workflow to extract settings from a repository.
# To enable a setting, create a "GitHub Environment" with the same name.
# This is a hack to enable per-repo settings that aren't copied to a fork.
# Without this, test workflows for a fork would time out waiting for
# self-hosted runners that the fork doesn't have.
name: Settings
# Runs when called from another workflow.
on:
workflow_call:
outputs:
self_hosted:
description: "Enable jobs requiring a self-hosted runner."
value: ${{ jobs.settings.outputs.self_hosted }}
debug:
description: "Enable SSH debugging when a workflow fails."
value: ${{ jobs.settings.outputs.debug }}
jobs:
settings:
runs-on: ubuntu-latest
outputs:
self_hosted: ${{ steps.settings.outputs.self_hosted }}
debug: ${{ steps.settings.outputs.debug }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- id: settings
run: |
environments=$(gh api /repos/${{ github.repository }}/environments)
for name in self_hosted debug; do
exists=$(echo $environments | jq ".environments[] | select(.name == \"$name\")")
if [[ "$exists" != "" ]]; then
echo "$name=true" >> $GITHUB_OUTPUT
echo "\"$name\" enabled."
else
echo "$name=" >> $GITHUB_OUTPUT
echo "\"$name\" disabled."
fi
done

.github/workflows/test-linux-distros.yaml (new file)

@@ -0,0 +1,74 @@
# Copyright 2022 Google LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# A workflow to test building in various Linux distros.
name: Test Linux Distros
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
# Configure the build matrix based on files in the repo.
docker_matrix_config:
name: Matrix configuration
runs-on: ubuntu-latest
outputs:
MATRIX: ${{ steps.configure.outputs.MATRIX }}
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.ref }}
- name: Configure Build Matrix
id: configure
shell: node {0}
run: |
const fs = require('fs');
const files = fs.readdirSync('packager/testing/dockers/');
const matrix = files.map((file) => {
return { os_name: file.replace('_Dockerfile', '') };
});
// Output a JSON object consumed by the build matrix below.
fs.writeFileSync(process.env.GITHUB_OUTPUT,
`MATRIX=${ JSON.stringify(matrix) }`,
{flag: 'a'});
// Log the outputs, for the sake of debugging this script.
console.log({matrix});
# Build each dockerfile in parallel in a different CI job.
build:
needs: docker_matrix_config
strategy:
# Let other matrix entries complete, so we have all results on failure
# instead of just the first failure.
fail-fast: false
matrix:
include: ${{ fromJSON(needs.docker_matrix_config.outputs.MATRIX) }}
name: ${{ matrix.os_name }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.ref }}
submodules: recursive
- name: Build in Docker
run: ./packager/testing/test_dockers.sh "${{ matrix.os_name }}"

.github/workflows/update_docs.yaml (deleted)

@@ -1,50 +0,0 @@
name: Update Docs
# Runs when a new release is published on GitHub.
#
# Pushes updated docs to GitHub Pages if triggered from a release workflow.
#
# Can also be run manually for debugging purposes.
on:
release:
types: [published]
# For manual debugging:
workflow_dispatch:
inputs:
ref:
description: "The ref to build docs from."
required: True
jobs:
publish_docs:
name: Build updated docs
runs-on: ubuntu-latest
steps:
- name: Compute ref
id: ref
# We could be building from a workflow dispatch (manual run) or from a
# release event. Subsequent steps can refer to the "ref" output of
# this job to determine the correct ref in all cases.
run: |
echo "::set-output name=ref::${{ github.event.inputs.ref || github.event.release.tag_name }}"
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ steps.ref.outputs.ref }}
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Build docs
uses: ./src/.github/workflows/custom-actions/build-docs
- name: Deploy to gh-pages branch
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: gh-pages
full_commit_message: Generate docs for ${{ steps.ref.outputs.ref }}

.gitignore

@@ -1,37 +1,19 @@
*.pyc
*.sln
*.swp
*.VC.db
*.vcxproj*
*/.vs/*
*~
.DS_store
.cache
.cproject
.project
.pydevproject
.idea
.repo
.settings
/out*
/packager/base/
/packager/build/
/packager/buildtools/third_party/libc++/trunk/
/packager/buildtools/third_party/libc++abi/trunk/
/packager/docs/
/packager/testing/gmock/
/packager/testing/gtest/
/packager/third_party/binutils/
/packager/third_party/boringssl/src/
/packager/third_party/curl/source/
/packager/third_party/gflags/src/
/packager/third_party/gold/
/packager/third_party/icu/
/packager/third_party/libpng/src/
/packager/third_party/libwebm/src/
/packager/third_party/llvm-build/
/packager/third_party/modp_b64/
/packager/third_party/tcmalloc/
/packager/third_party/yasm/source/patched-yasm/
/packager/third_party/zlib/
/packager/tools/clang/
/packager/tools/gyp/
/packager/tools/valgrind/
build/
junit-reports/
packager/docs/
.vscode/

.gitmodules (new file)

@@ -0,0 +1,36 @@
[submodule "packager/third_party/googletest/source"]
path = packager/third_party/googletest/source
url = https://github.com/google/googletest
[submodule "packager/third_party/abseil-cpp/source"]
path = packager/third_party/abseil-cpp/source
url = https://github.com/abseil/abseil-cpp
[submodule "packager/third_party/curl/source"]
path = packager/third_party/curl/source
url = https://github.com/curl/curl
[submodule "packager/third_party/json/source"]
path = packager/third_party/json/source
url = https://github.com/nlohmann/json
[submodule "packager/third_party/mbedtls/source"]
path = packager/third_party/mbedtls/source
url = https://github.com/Mbed-TLS/mbedtls
[submodule "packager/third_party/zlib/source"]
path = packager/third_party/zlib/source
url = https://github.com/joeyparrish/zlib
[submodule "packager/third_party/libpng/source"]
path = packager/third_party/libpng/source
url = https://github.com/glennrp/libpng
[submodule "packager/third_party/libwebm/source"]
path = packager/third_party/libwebm/source
url = https://github.com/webmproject/libwebm
[submodule "packager/third_party/libxml2/source"]
path = packager/third_party/libxml2/source
url = https://github.com/GNOME/libxml2
[submodule "packager/third_party/protobuf/source"]
path = packager/third_party/protobuf/source
url = https://github.com/protocolbuffers/protobuf
[submodule "packager/third_party/mongoose/source"]
path = packager/third_party/mongoose/source
url = https://github.com/cesanta/mongoose
[submodule "packager/third_party/c-ares/source"]
path = packager/third_party/c-ares/source
url = https://github.com/c-ares/c-ares
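
With third-party dependencies now tracked as submodules, every checkout in CI initializes them recursively. The new workflows above all use this pattern on `actions/checkout`:

```yaml
- name: Checkout code
  uses: actions/checkout@v3
  with:
    ref: ${{ inputs.ref }}
    submodules: recursive  # Pulls the third_party sources listed in .gitmodules.
```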

AUTHORS
@@ -21,20 +21,22 @@ Audible <*@audible.com>
Cyfrowy Polsat SA <*@cyfrowypolsat.pl>
Chun-da Chen <capitalm.c@gmail.com>
Daniel Cantarín <canta@canta.com.ar>
Dennis E. Mungai (Brainiarc7) <dmngaie@gmail.com>
Dolby Laboratories <*@dolby.com>
Evgeny Zajcev <zevlg@yandex.ru>
Eyevinn Technology AB <*@eyevinn.se>
Google Inc. <*@google.com>
Google LLC. <*@google.com>
Ivi.ru LLC <*@ivi.ru>
Leandro Moreira <leandro.ribeiro.moreira@gmail.com>
Leo Law <leoltlaw.gh@gmail.com>
Meta Platforms, Inc. <*@meta.com>
More Screens Ltd. <*@morescreens.net>
Ole Andre Birkedal <o.birkedal@sportradar.com>
Philo Inc. <*@philo.com>
Piotr Srebrny <srebrny.piotr@gmail.com>
Prakash Duggaraju <duggaraju@gmail.com>
Richard Eklycke <richard@eklycke.se>
Sanil Raut <sr1990003@gmail.com>
Sergio Ammirata <sergio@ammirata.net>
The Chromium Authors <*@chromium.org>
Prakash Duggaraju <duggaraju@gmail.com>
Dennis E. Mungai (Brainiarc7) <dmngaie@gmail.com>

CHANGELOG.md
@@ -1,3 +1,6 @@
# Changelog
## [2.6.1] - 2021-10-14
### Fixed
- Fix crash in static-linked linux builds (#996)

CMakeLists.txt
@@ -0,0 +1,29 @@
# Copyright 2022 Google LLC. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# Root-level CMake build file.
# Minimum CMake version. This must be in the root level CMakeLists.txt.
cmake_minimum_required(VERSION 3.16)
# These policy settings should be included before the project definition.
include("packager/policies.cmake")
# Project name. May not contain spaces. Versioning is managed elsewhere.
project(shaka-packager VERSION "")
# The only build option for Shaka Packager is whether to build a shared
# libpackager library. By default, don't.
option(BUILD_SHARED_LIBS "Build libpackager as a shared library" OFF)
# Enable CMake's test infrastructure.
enable_testing()
option(SKIP_INTEGRATION_TESTS "Skip the packager integration tests" ON)
# Subdirectories with their own CMakeLists.txt
add_subdirectory(packager)
add_subdirectory(link-test)
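As a sketch of how these options are driven from the command line (Ninja assumed as the generator): `BUILD_SHARED_LIBS` and `SKIP_INTEGRATION_TESTS` are set at configure time, and `enable_testing()` makes the suite visible to `ctest`:

```shell
# Configure a shared-library build that also runs the integration tests.
cmake -B build -G Ninja -DBUILD_SHARED_LIBS=ON -DSKIP_INTEGRATION_TESTS=OFF
cmake --build build --parallel
ctest --test-dir build
```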

CONTRIBUTORS
@@ -27,8 +27,10 @@ Anders Hasselqvist <anders.hasselqvist@gmail.com>
Andreas Motl <andreas.motl@elmyra.de>
Bei Li <beil@google.com>
Chun-da Chen <capitalm.c@gmail.com>
Cosmin Stejerean <cstejerean@meta.com>
Daniel Cantarín <canta@canta.com.ar>
David Cavar <pal3thorn@gmail.com>
Dennis E. Mungai (Brainiarc7) <dmngaie@gmail.com>
Evgeny Zajcev <zevlg@yandex.ru>
Gabe Kopley <gabe@philo.com>
Geoff Jukes <geoff@jukes.org>
@@ -43,6 +45,7 @@ Marcus Spangenberg <marcus.spangenberg@eyevinn.se>
Michal Wierzbicki <mwierzbicki1@cyfrowypolsat.pl>
Ole Andre Birkedal <o.birkedal@sportradar.com>
Piotr Srebrny <srebrny.piotr@gmail.com>
Prakash Duggaraju <duggaraju@gmail.com>
Qingquan Wang <wangqq1103@gmail.com>
Richard Eklycke <richard@eklycke.se>
Rintaro Kuroiwa <rkuroiwa@google.com>
@@ -53,5 +56,4 @@ Thomas Inskip <tinskip@google.com>
Tim Lansen <tim.lansen@gmail.com>
Vincent Nguyen <nvincen@amazon.com>
Weiguo Shao <weiguo.shao@dolby.com>
Prakash Duggaraju <duggaraju@gmail.com>
Dennis E. Mungai (Brainiarc7) <dmngaie@gmail.com>

DEPS
@@ -1,92 +0,0 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
#
# Packager dependencies.
vars = {
"chromium_git": "https://chromium.googlesource.com",
"github": "https://github.com",
}
deps = {
"src/packager/base":
Var("chromium_git") + "/chromium/src/base@a34eabec0d807cf03dc8cfc1a6240156ac2bbd01", #409071
"src/packager/build":
Var("chromium_git") + "/chromium/src/build@f0243d787961584ac95a86e7dae897b9b60ea674", #409966
"src/packager/testing/gmock":
Var("chromium_git") + "/external/googlemock@0421b6f358139f02e102c9c332ce19a33faf75be", #566
"src/packager/testing/gtest":
Var("chromium_git") + "/external/github.com/google/googletest@6f8a66431cb592dad629028a50b3dd418a408c87",
# Make sure the version matches the one in
# src/packager/third_party/boringssl, which contains perl generated files.
"src/packager/third_party/boringssl/src":
Var("github") + "/google/boringssl@76918d016414bf1d71a86d28239566fbcf8aacf0",
"src/packager/third_party/curl/source":
Var("github") + "/curl/curl@62c07b5743490ce373910f469abc8cdc759bec2b", #7.57.0
"src/packager/third_party/gflags/src":
Var("chromium_git") + "/external/github.com/gflags/gflags@03bebcb065c83beff83d50ae025a55a4bf94dfca",
# Required by libxml.
"src/packager/third_party/icu":
Var("chromium_git") + "/chromium/deps/icu@ef5c735307d0f86c7622f69620994c9468beba99",
"src/packager/third_party/libpng/src":
Var("github") + "/glennrp/libpng@a40189cf881e9f0db80511c382292a5604c3c3d1",
"src/packager/third_party/libwebm/src":
Var("chromium_git") + "/webm/libwebm@d6af52a1e688fade2e2d22b6d9b0c82f10d38e0b",
"src/packager/third_party/modp_b64":
Var("chromium_git") + "/chromium/src/third_party/modp_b64@aae60754fa997799e8037f5e8ca1f56d58df763d", #405651
"src/packager/third_party/tcmalloc/chromium":
Var("chromium_git") + "/chromium/src/third_party/tcmalloc/chromium@58a93bea442dbdcb921e9f63e9d8b0009eea8fdb", #374449
"src/packager/third_party/zlib":
Var("chromium_git") + "/chromium/src/third_party/zlib@830b5c25b5fbe37e032ea09dd011d57042dd94df", #408157
"src/packager/tools/gyp":
Var("chromium_git") + "/external/gyp@caa60026e223fc501e8b337fd5086ece4028b1c6",
}
deps_os = {
"win": {
# Required by boringssl.
"src/packager/third_party/yasm/source/patched-yasm":
Var("chromium_git") + "/chromium/deps/yasm/patched-yasm.git@7da28c6c7c6a1387217352ce02b31754deb54d2a",
},
}
hooks = [
{
# When using CC=clang CXX=clang++, there is a binutils version check that
# does not work correctly in common.gypi. Since we are stuck with a very
# old version of chromium/src/build, there is nothing to do but patch it to
# remove the check. Thankfully, this version number does not control
# anything critical in the build settings as far as we can tell.
'name': 'patch-binutils-version-check',
'pattern': '.',
'action': ['sed', '-e', 's/<!pymod_do_main(compiler_version target assembler)/0/', '-i.bk', 'src/packager/build/common.gypi'],
},
{
# A change to a .gyp, .gypi, or to GYP itself should run the generator.
"pattern": ".",
"action": ["python", "src/gyp_packager.py", "--depth=src/packager"],
},
{
# Update LASTCHANGE.
'name': 'lastchange',
'pattern': '.',
'action': ['python', 'src/packager/build/util/lastchange.py',
'-o', 'src/packager/build/util/LASTCHANGE'],
},
]

Dockerfile
@@ -1,46 +1,29 @@
FROM alpine:3.11 as builder
FROM alpine:3.12 as builder
# Install utilities, libraries, and dev tools.
RUN apk add --no-cache \
bash curl \
bsd-compat-headers c-ares-dev linux-headers \
build-base git ninja python2 python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
WORKDIR /
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH $PATH:/depot_tools
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
# Alpine uses musl which does not have mallinfo defined in malloc.h. Define the
# structure to workaround a Chromium base bug.
RUN sed -i \
'/malloc_usable_size/a \\nstruct mallinfo {\n int arena;\n int hblkhd;\n int uordblks;\n};' \
/usr/include/malloc.h
ENV GYP_DEFINES='musl=1'
bsd-compat-headers linux-headers \
build-base cmake git ninja python3
# Build shaka-packager from the current directory, rather than what has been
# merged.
WORKDIR shaka_packager
RUN gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
COPY . src
RUN gclient sync --force
RUN ninja -C src/out/Release
WORKDIR shaka-packager
COPY . /shaka-packager/
RUN rm -rf build
RUN cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug -G Ninja
RUN cmake --build build/ --config Debug --parallel
# Copy only result binaries to our final image.
FROM alpine:3.11
RUN apk add --no-cache libstdc++ python
COPY --from=builder /shaka_packager/src/out/Release/packager \
/shaka_packager/src/out/Release/mpd_generator \
/shaka_packager/src/out/Release/pssh-box.py \
FROM alpine:3.12
RUN apk add --no-cache libstdc++ python3
COPY --from=builder /shaka-packager/build/packager/packager \
/shaka-packager/build/packager/mpd_generator \
/shaka-packager/build/packager/pssh-box.py \
/usr/bin/
# Copy pyproto directory, which is needed by pssh-box.py script. This line
# cannot be combined with the line above as Docker's copy command skips the
# directory itself. See https://github.com/moby/moby/issues/15858 for details.
COPY --from=builder /shaka_packager/src/out/Release/pyproto /usr/bin/pyproto
COPY --from=builder /shaka-packager/build/packager/pssh-box-protos \
/usr/bin/pssh-box-protos
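A hedged usage sketch for this image; the image tag and the input file below are placeholders, while the `in=`/`stream=`/`output=` syntax is standard packager CLI:

```shell
# Build the image from the repo root, then package a local MP4.
docker build -t shaka-packager .
docker run -v "$PWD":/media shaka-packager \
  packager in=/media/input.mp4,stream=video,output=/media/video.mp4
```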

LICENSE
@@ -1,4 +1,4 @@
Copyright 2014, Google Inc. All rights reserved.
Copyright 2014, Google LLC. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
@@ -10,7 +10,7 @@ notice, this list of conditions and the following disclaimer.
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
* Neither the name of Google LLC. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
@@ -42,7 +42,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// * Neither the name of Google LLC. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//

@@ -10,7 +10,7 @@
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// * Neither the name of Google LLC. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//

@@ -58,7 +58,7 @@ PROJECT_LOGO =
# entered, it will be relative to the location where doxygen was started. If
# left blank the current directory will be used.
OUTPUT_DIRECTORY = out/doxygen
OUTPUT_DIRECTORY = build/doxygen
# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub-
# directories (in 2 levels) under the output directory of each output format and

@@ -6,7 +6,7 @@ SPHINXOPTS =
SPHINXBUILD = python3 -msphinx
SPHINXPROJ = ShakaPackager
SOURCEDIR = source
BUILDDIR = ../out/sphinx
BUILDDIR = ../build/sphinx
# Put it first so that "make" without argument is like "make help".
help:

@@ -1,6 +1,6 @@
<!DOCTYPE html>
<!--
Copyright 2021 Google Inc
Copyright 2021 Google LLC
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file or at

@@ -1,16 +1,13 @@
# Linux Profiling
Profiling code is enabled when the `use_allocator` variable in gyp is set to
`tcmalloc` and `profiling` variable in gyp is set to `1`. That will build the
tcmalloc library, including the cpu profiling and heap profiling code into
shaka-packager, e.g.
In theory, we should be able to build Packager using
[gperftools](https://github.com/gperftools/gperftools/tree/master) to
get back the profiling functionality described below. However, actually
integrating this into the CMake build is not yet done. Pull requests are
welcome. See https://github.com/shaka-project/shaka-packager/issues/1277
GYP_DEFINES='profiling=1 use_allocator="tcmalloc"' gclient runhooks
If the stack traces in your profiles are incomplete, this may be due to missing
frame pointers in some of the libraries. A workaround is to use the
`linux_keep_shadow_stacks=1` gyp option. This will keep a shadow stack using the
`-finstrument-functions` option of gcc and consult the stack when unwinding.
If packager was linked using `-ltcmalloc` then the following
instructions should work:
## CPU Profiling
@@ -53,21 +50,11 @@ catch those, use the `HEAP_PROFILE_ALLOCATION_INTERVAL` environment variable.
To programmatically generate a heap profile before exit, use code like:
#include "packager/third_party/tcmalloc/chromium/src/gperftools/heap-profiler.h"
#include <gperftools/heap-profiler.h>
// "foobar" will be included in the message printed to the console
HeapProfilerDump("foobar");
Then add allocator.gyp dependency to the target with the above change:
'conditions': [
['profiling==1', {
'dependencies': [
'base/allocator/allocator.gyp:allocator',
],
}],
],
Or you can use gdb to attach at any point:
1. Attach gdb to the process: `$ gdb -p 12345`
@@ -79,31 +66,18 @@ Or you can use gdb to attach at any point:
## Thread sanitizer (tsan)
To compile with the thread sanitizer library (tsan), you must set clang as your
compiler and set the `tsan=1` and `tsan_blacklist` configs:
CC=clang CXX=clang++ GYP_DEFINES="tsan=1 tsan_blacklist=/path/to/src/packager/tools/memory/tsan_v2/ignores.txt" gclient runhooks
compiler and set `-fsanitize=thread` in compiler flags.
NOTE: tsan and asan cannot be used at the same time.
## Address sanitizer (asan)
To compile with the address sanitizer library (asan), you must set clang as your
compiler and set the `asan=1` config:
CC=clang CXX=clang++ GYP_DEFINES="asan=1" gclient runhooks
compiler and set `-fsanitize=address` in compiler and linker flags.
NOTE: tsan and asan cannot be used at the same time.
## Leak sanitizer (lsan)
To compile with the leak sanitizer library (lsan), you must set clang as your
compiler and set the `lsan=1` config:
CC=clang CXX=clang++ GYP_DEFINES="lsan=1" gclient runhooks
## Reference
[Linux Profiling in Chromium](https://chromium.googlesource.com/chromium/src/+/master/docs/linux_profiling.md)
compiler and use `-fsanitize=leak` in compiler and linker flags.
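One way to pass these sanitizer flags through a CMake configure, as a sketch (swap `address` for `thread` or `leak` as needed, remembering that tsan and asan are mutually exclusive):

```shell
# Configure an asan build with clang.
CC=clang CXX=clang++ cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug \
  -DCMAKE_C_FLAGS="-fsanitize=address" \
  -DCMAKE_CXX_FLAGS="-fsanitize=address" \
  -DCMAKE_EXE_LINKER_FLAGS="-fsanitize=address"
cmake --build build --parallel
```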

@@ -4,7 +4,7 @@ Shaka Packager supports building on Windows, Mac and Linux host systems.
## Linux build dependencies
Most development is done on Ubuntu (currently 14.04, Trusty Tahr). The
Most development is done on Ubuntu (currently 22.04 LTS, Jammy Jellyfish). The
dependencies mentioned here are only for Ubuntu. There are some instructions
for [other distros below](#notes-for-other-linux-distros).
@@ -12,226 +12,157 @@ for [other distros below](#notes-for-other-linux-distros).
sudo apt-get update
sudo apt-get install -y \
curl \
libc-ares-dev \
build-essential git python python3
build-essential cmake git ninja-build python3
```
Note that `Git` must be v1.7.5 or above.
Note that `git` must be v1.7.6 or above to support relative paths in submodules.
## Mac system requirements
* [Xcode](https://developer.apple.com/xcode) 7.3+.
* The OS X 10.10 SDK or later. Run
* [Xcode](https://developer.apple.com/xcode) 7.3+.
* The OS X 10.10 SDK or later. Run
```shell
ls `xcode-select -p`/Platforms/MacOSX.platform/Developer/SDKs
```
```shell
ls `xcode-select -p`/Platforms/MacOSX.platform/Developer/SDKs
```
to check whether you have it.
to check whether you have it.
* Note that there is a known problem with 10.15 SDK or later right now. You
can workaround it by using 10.14 SDK. See
[#660](https://github.com/shaka-project/shaka-packager/issues/660#issuecomment-552576341)
for details.
## Install Ninja (recommended) using Homebrew
```shell
brew install ninja
```
## Windows system requirements
* Visual Studio 2015 Update 3, 2017, or 2019. (See below.)
* Windows 7 or newer.
* Visual Studio 2017 or newer.
* Windows 10 or newer.
Install Visual Studio 2015 Update 3 or later - Community Edition should work if
its license is appropriate for you. Use the Custom Install option and select:
The recommended version of Visual Studio is 2022. The Community edition
should work for open-source development of tools like Shaka Packager, but
please check the Community license terms for your specific situation.
- Visual C++, which will select three sub-categories including MFC
- Universal Windows Apps Development Tools > Tools (1.4.1) and Windows 10 SDK
(10.0.14393)
Install the "Desktop development with C++" workload which will install
CMake and other needed tools.
If using VS 2017 or VS 2019, you must set the following environment variables,
with versions and paths adjusted to match your actual system:
If you use chocolatey, you can install these dependencies with:
```shell
GYP_MSVS_VERSION="2019"
GYP_MSVS_OVERRIDE_PATH="C:/Program Files (x86)/Microsoft Visual Studio/2019/Community"
```ps1
choco install -y `
git cmake ninja python `
visualstudio2022community visualstudio2022-workload-nativedesktop `
visualstudio2022buildtools windows-sdk-10.0
# Find python install
$pythonpath = Get-Item c:\Python* | sort CreationDate | Select-Object -First 1
# Symlink python3 to python
New-Item -ItemType SymbolicLink `
-Path "$pythonpath/python3.exe" -Target "$pythonpath/python.exe"
# Update global PATH
$env:PATH += ";C:\Program Files\Git\bin;c:\Program Files\CMake\bin;$pythonpath"
setx PATH "$env:PATH"
```
## Install `depot_tools`
Clone a particular branch of the `depot_tools` repository from Chromium:
```shell
git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
touch depot_tools/.disable_auto_update
```
The latest version of depot_tools will not work, so please use that branch!
### Linux and Mac
Add `depot_tools` to the end of your PATH (you will probably want to put this
in your `~/.bashrc` or `~/.zshrc`). Assuming you cloned `depot_tools` to
`/path/to/depot_tools`:
```shell
export PATH="$PATH:/path/to/depot_tools"
```
### Windows
Add depot_tools to the start of your PATH (must be ahead of any installs of
Python). Assuming you cloned the repo to C:\src\depot_tools, open:
Control Panel → System and Security → System → Advanced system settings
If you have Administrator access, Modify the PATH system variable and
put `C:\src\depot_tools` at the front (or at least in front of any directory
that might already have a copy of Python or Git).
If you don't have Administrator access, you can add a user-level PATH
environment variable and put `C:\src\depot_tools` at the front, but
if your system PATH has a Python in it, you will be out of luck.
From a cmd.exe shell, run the command gclient (without arguments). On first
run, gclient will install all the Windows-specific bits needed to work with
the code, including msysgit and python.
* If you run gclient from a non-cmd shell (e.g., cygwin, PowerShell),
it may appear to run properly, but msysgit, python, and other tools
may not get installed correctly.
* If you see strange errors with the file system on the first run of gclient,
you may want to
[disable Windows Indexing](http://tortoisesvn.tigris.org/faq.html#cantmove2).
## Get the code
Create a `shaka_packager` directory for the checkout and change to it (you can
call this whatever you like and put it wherever you like, as long as the full
path has no spaces):
Dependencies are now managed via git submodules. To get a complete
checkout you can run:
```shell
mkdir shaka_packager && cd shaka_packager
```
Run the `gclient` tool from `depot_tools` to check out the code and its
dependencies.
```shell
gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
gclient sync -r main
```
To sync to a particular commit or version, add the '-r \<revision\>' flag to
`gclient sync`, e.g.
```shell
gclient sync -r 4cb5326355e1559d60b46167740e04624d0d2f51
```
```shell
gclient sync -r v1.2.0
```
If you don't want the full repo history, you can save some time by adding the
`--no-history` flag to `gclient sync`.
When the above commands completes, it will have created a hidden `.gclient` file
and a directory called `src` in the working directory. The remaining
instructions assume you have switched to the `src` directory:
```shell
cd src
git clone --recurse-submodules https://github.com/shaka-project/shaka-packager.git
```
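To pin a checkout to a specific release, roughly what `gclient sync -r <revision>` used to do, check out the tag and re-sync the submodules (`v2.6.1` here is just an example tag):

```shell
cd shaka-packager
git checkout v2.6.1
git submodule update --init --recursive
```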
### Build Shaka Packager
#### Linux and Mac
Shaka Packager uses [Ninja](https://ninja-build.org) as its main build tool,
which is bundled in depot_tools.
Shaka Packager uses [CMake](https://cmake.org) as the main build tool,
with Ninja as the recommended generator (outside of Windows).
To build the code, run `ninja` command:
```shell
ninja -C out/Release
cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
```
If you want to build debug code, replace `Release` above with `Debug`.
We also provide a mechanism to change build settings, for example,
you can change build system to `make` by overriding `GYP_GENERATORS`:
You can change other build settings with `-D` flags to CMake. For example,
you can build a shared `libpackager` instead of a static one by adding
```shell
GYP_GENERATORS='make' gclient runhooks
-DBUILD_SHARED_LIBS="ON"
```
After configuring CMake you can run the build with
```shell
cmake --build build --parallel
```
#### Windows
The instructions are similar, except that Windows allows using either `/` or `\`
as path separator:
Windows build instructions are similar. Using Tools > Command Line >
Developer Command Prompt should open a terminal with cmake and ctest in the
PATH. Omit the `-G Ninja` to use the default backend, and pass `--config`
during build to select the desired configuration from Visual Studio.
```shell
ninja -C out/Release
ninja -C out\Release
```
Also, unlike Linux / Mac, 32-bit is chosen by default even if the system is
64-bit. 64-bit has to be enabled explicitly and the output directory is
configured to `out/%CONFIGURATION%_x64`, i.e.:
```shell
SET GYP_DEFINES='target_arch=x64'
gclient runhooks
ninja -C out/Release_x64
cmake -B build
cmake --build build --parallel --config Release
```
### Build artifacts
After a successful build, you can find build artifacts including the main
`packager` binary in build output directory (`out/Release` or `out/Release_x64`
for release build).
`packager` binary in build output directory (`build/packager/` for a Ninja
build, `build/packager/Release/` for a Visual Studio release build, or
`build/packager/Debug/` for a Visual Studio debug build).
See [Shaka Packager Documentation](https://shaka-project.github.io/shaka-packager/html/)
on how to use `Shaka Packager`.
### Installation
To install Shaka Packager, run:
```shell
cmake --install build/ --strip --config Release
```
You can customize the output location with `--prefix` (default `/usr/local` on
Linux and macOS) and the `DESTDIR` environment variable. These are provided by
CMake and follow standard conventions for installation. For example, to build
a package by installing to `foo` instead of the system root, and to use `/usr`
instead of `/usr/local`, you could run:
```shell
DESTDIR=foo cmake --install build/ --strip --config Release --prefix=/usr
```
### Update your checkout
To update an existing checkout, you can run
```shell
git pull origin main --rebase
gclient sync
git submodule update --init --recursive
```
The first command updates the primary Packager source repository and rebases on
top of tip-of-tree (aka the Git branch `origin/main`). You can also use other
common Git commands to update the repo.
The second command syncs dependencies to the appropriate versions and re-runs
hooks as needed.
## Cross compiling for ARM on Ubuntu host
The install-build-deps script can be used to install all the compiler
and library dependencies directly from Ubuntu:
```shell
./packager/build/install-build-deps.sh
```
Install sysroot image and others using `gclient`:
```shell
GYP_CROSSCOMPILE=1 GYP_DEFINES="target_arch=arm" gclient runhooks
```
The build command is the same as in Ubuntu:
```shell
ninja -C out/Release
```
The second command updates the submodules for the third-party dependencies.
## Notes for other linux distros
The Docker files at `packager/testing/dockers` have the most up-to-date
commands for installing dependencies. For example:
### Alpine Linux
Use `apk` command to install dependencies:
@@ -239,24 +170,8 @@ Use `apk` command to install dependencies:
```shell
apk add --no-cache \
bash curl \
bsd-compat-headers c-ares-dev linux-headers \
build-base git ninja python2 python3
```
Alpine uses musl which does not have mallinfo defined in malloc.h. It is
required by one of Shaka Packager's dependency. To workaround the problem, a
dummy structure has to be defined in /usr/include/malloc.h, e.g.
```shell
sed -i \
'/malloc_usable_size/a \\nstruct mallinfo {\n int arena;\n int hblkhd;\n int uordblks;\n};' \
/usr/include/malloc.h
```
We also need to enable musl in the build config:
```shell
export GYP_DEFINES='musl=1'
bsd-compat-headers linux-headers \
build-base cmake git ninja python3
```
### Arch Linux
@@ -264,40 +179,67 @@ export GYP_DEFINES='musl=1'
Instead of running `sudo apt-get install` to install build dependencies, run:
```shell
sudo pacman -Sy --needed \
pacman -Suy --needed --noconfirm \
core/which \
c-ares \
gcc git python2 python3
cmake gcc git ninja python3
```
### Debian
Same as Ubuntu.
```shell
apt-get install -y \
curl \
build-essential cmake git ninja-build python3
```
### Fedora
Instead of running `sudo apt-get install` to install build dependencies, run:
```shell
su -c 'yum install -y \
yum install -y \
which \
c-ares-devel libatomic \
gcc-c++ git python2'
libatomic \
cmake gcc-c++ git ninja-build python3
```
### CentOS
Same as Fedora.
For CentOS, Ninja is only available from the CRB (Code Ready Builder) repo:
```shell
dnf update -y
dnf install -y yum-utils
dnf config-manager --set-enabled crb
```
then install the same packages as for Fedora:
```shell
yum install -y \
which \
libatomic \
cmake gcc-c++ git ninja-build python3
```
### OpenSUSE
Use `zypper` command to install dependencies:
```shell
sudo zypper in -y \
zypper in -y \
curl which \
c-ares-devel \
gcc-c++ git python python3
cmake gcc9-c++ git ninja python3
```
OpenSUSE 15 doesn't have the required GCC 9+ by default, but we can install
it as gcc9 and symlink it:
```shell
ln -s g++-9 /usr/bin/g++
ln -s gcc-9 /usr/bin/gcc
```
## Tips, tricks, and troubleshooting
@@ -322,41 +264,10 @@ Only accepting for all users of the machine requires root:
sudo xcodebuild -license
```
### Missing curl CA bundle
If you are getting the error
> gyp: Call to 'config/mac/find_curl_ca_bundle.sh' returned exit status 1 ...
curl CA bundle is not able to be located. Installing curl with openssl should
resolve the issue:
```shell
brew install curl --with-openssl
```
### Using an IDE
No specific instructions are available.
You might find Gyp generators helpful. Output is not guaranteed to work.
Manual editing might be necessary.
To generate CMakeLists.txt in out/Release and out/Debug use:
```shell
GYP_GENERATORS=cmake gclient runhooks
```
To generate IDE project files in out/Release and out/Debug use:
```shell
GYP_GENERATORS=eclipse gclient runhooks
GYP_GENERATORS=xcode gclient runhooks
GYP_GENERATORS=xcode_test gclient runhooks
GYP_GENERATORS=msvs gclient runhooks
GYP_GENERATORS=msvs_test gclient runhooks
```
No specific instructions are available, but most IDEs with CMake support
should work out of the box.
## Contributing
@@ -367,12 +278,22 @@ details.
We have continuous integration tests set up on pull requests. You can also
verify locally by running the tests manually.
If you know which tests are affected by your change, you can limit which tests
are run using the `--gtest_filter` arg, e.g.:
```shell
out/Debug/mp4_unittest --gtest_filter="MP4MediaParserTest.*"
ctest -C Debug -V --test-dir build
```
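With CTest you can restrict the run by test-name regex using `-R`, which roughly replaces the old per-binary `--gtest_filter` invocation; the pattern below is illustrative:

```shell
ctest -C Debug --test-dir build -R Mp4 --output-on-failure
```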
You can find out more about GoogleTest at its
[GitHub page](https://github.com/google/googletest).
You should install `clang-format` (using `apt install` or `brew
install` depending on platform) to ensure that all code changes are
properly formatted.
You should commit or stage (with `git add`) any code changes first. Then run
```shell
git clang-format --style Chromium origin/main
```
This will run formatting over just the files you modified (any changes
since origin/main).

gyp_packager.py
@@ -1,124 +0,0 @@
#!/usr/bin/python3
#
# Copyright 2014 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
"""This script wraps gyp and sets up build environments.
Build instructions:
1. Setup gyp: ./gyp_packager.py or use gclient runhooks
Ninja is the default build system. User can also change to make by
overriding GYP_GENERATORS to make, i.e.
"GYP_GENERATORS='make' gclient runhooks".
2. The first step generates the make files but does not start the
build process. Ninja is the default build system. Refer to Ninja
manual on how to do the build.
Common syntaxes: ninja -C out/{Debug/Release} [Module]
Module is optional. If not specified, build everything.
Step 1 is only required if there is any gyp file change. Otherwise, you
may just run ninja.
"""
import os
import sys
checkout_dir = os.path.dirname(os.path.realpath(__file__))
src_dir = os.path.join(checkout_dir, 'packager')
# Workaround the dynamic path.
# pylint: disable=wrong-import-position
sys.path.insert(0, os.path.join(src_dir, 'build'))
import gyp_helper
sys.path.insert(0, os.path.join(src_dir, 'tools', 'gyp', 'pylib'))
import gyp
if __name__ == '__main__':
args = sys.argv[1:]
# Allow src/.../chromium.gyp_env to define GYP variables.
gyp_helper.apply_chromium_gyp_env()
# If we didn't get a gyp file, then fall back to assuming 'packager.gyp' from
# the same directory as the script.
if not any(arg.endswith('.gyp') for arg in args):
args.append(os.path.join(src_dir, 'packager.gyp'))
# Always include Chromium's common.gypi and our common.gypi.
args.extend([
'-I' + os.path.join(src_dir, 'build', 'common.gypi'),
'-I' + os.path.join(src_dir, 'common.gypi')
])
# Set these default GYP_DEFINES if user does not set the value explicitly.
_DEFAULT_DEFINES = {'test_isolation_mode': 'noop',
'use_custom_libcxx': 0,
'use_glib': 0,
'use_openssl': 1,
'use_sysroot': 0,
'use_x11': 0,
'linux_use_bundled_binutils': 0,
'linux_use_bundled_gold': 0,
'linux_use_gold_flags': 0,
'clang': 0,
'host_clang': 0,
'clang_xcode': 1,
'use_allocator': 'none',
'mac_deployment_target': '10.10',
'use_experimental_allocator_shim': 0,
'clang_use_chrome_plugins': 0}
gyp_defines_str = os.environ.get('GYP_DEFINES', '')
user_gyp_defines_map = {}
for term in gyp_defines_str.split(' '):
if term:
key, value = term.strip().split('=')
user_gyp_defines_map[key] = value
for key, value in _DEFAULT_DEFINES.items():
if key not in user_gyp_defines_map:
gyp_defines_str += ' {0}={1}'.format(key, value)
os.environ['GYP_DEFINES'] = gyp_defines_str.strip()
# Default to ninja, but only if no generator has explicitly been set.
if 'GYP_GENERATORS' not in os.environ:
os.environ['GYP_GENERATORS'] = 'ninja'
# By default, don't download our own toolchain for Windows.
if 'DEPOT_TOOLS_WIN_TOOLCHAIN' not in os.environ:
os.environ['DEPOT_TOOLS_WIN_TOOLCHAIN'] = '0'
# There shouldn't be a circular dependency relationship between .gyp files,
# but in Chromium's .gyp files, on non-Mac platforms, circular relationships
# currently exist. The check for circular dependencies is currently
# bypassed on other platforms, but is left enabled on the Mac, where a
# violation of the rule causes Xcode to misbehave badly.
if 'xcode' not in os.environ['GYP_GENERATORS']:
args.append('--no-circular-check')
# TODO(kqyang): Find a better way to handle the depth. This workaround works
# only if this script is executed in 'src' directory.
if not any('--depth' in arg for arg in args):
args.append('--depth=packager')
if 'output_dir=' not in os.environ.get('GYP_GENERATOR_FLAGS', ''):
output_dir = os.path.join(checkout_dir, 'out')
gyp_generator_flags = 'output_dir="' + output_dir + '"'
if os.environ.get('GYP_GENERATOR_FLAGS'):
os.environ['GYP_GENERATOR_FLAGS'] += ' ' + gyp_generator_flags
else:
os.environ['GYP_GENERATOR_FLAGS'] = gyp_generator_flags
print('Updating projects from gyp files...')
sys.stdout.flush()
# Off we go...
sys.exit(gyp.main(args))

include/README.md
@@ -0,0 +1,6 @@
# Public headers for libpackager
These are the public headers for libpackager. They can only reference other
public headers or standard system headers. They cannot reference internal
headers (in `packager/...`) or third-party dependency headers (in
`packager/third_party/...`).
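A quick, unofficial self-check for this rule is to list everything the public headers pull in and review it by hand:

```shell
# List all includes used by the public headers.
grep -rh '#include' include/packager | sort -u
```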

include/packager/ad_cue_generator_params.h
@@ -1,11 +1,11 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_MEDIA_PUBLIC_AD_CUE_GENERATOR_PARAMS_H_
#define PACKAGER_MEDIA_PUBLIC_AD_CUE_GENERATOR_PARAMS_H_
#ifndef PACKAGER_PUBLIC_AD_CUE_GENERATOR_PARAMS_H_
#define PACKAGER_PUBLIC_AD_CUE_GENERATOR_PARAMS_H_
#include <vector>
@@ -27,4 +27,4 @@ struct AdCueGeneratorParams {
} // namespace shaka
#endif // PACKAGER_MEDIA_PUBLIC_AD_CUE_GENERATOR_PARAMS_H_
#endif // PACKAGER_PUBLIC_AD_CUE_GENERATOR_PARAMS_H_

include/packager/buffer_callback_params.h
@@ -1,12 +1,13 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_FILE_PUBLIC_BUFFER_CALLBACK_PARAMS_H_
#define PACKAGER_FILE_PUBLIC_BUFFER_CALLBACK_PARAMS_H_
#ifndef PACKAGER_PUBLIC_BUFFER_CALLBACK_PARAMS_H_
#define PACKAGER_PUBLIC_BUFFER_CALLBACK_PARAMS_H_
#include <cstdint>
#include <functional>
namespace shaka {
@@ -32,4 +33,4 @@ struct BufferCallbackParams {
} // namespace shaka
#endif // PACKAGER_FILE_PUBLIC_BUFFER_CALLBACK_PARAMS_H_
#endif // PACKAGER_PUBLIC_BUFFER_CALLBACK_PARAMS_H_

include/packager/chunking_params.h
@@ -1,11 +1,11 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_MEDIA_PUBLIC_CHUNKING_PARAMS_H_
#define PACKAGER_MEDIA_PUBLIC_CHUNKING_PARAMS_H_
#ifndef PACKAGER_PUBLIC_CHUNKING_PARAMS_H_
#define PACKAGER_PUBLIC_CHUNKING_PARAMS_H_
namespace shaka {
@@ -35,4 +35,4 @@ struct ChunkingParams {
} // namespace shaka
#endif // PACKAGER_MEDIA_PUBLIC_CHUNKING_PARAMS_H_
#endif // PACKAGER_PUBLIC_CHUNKING_PARAMS_H_

include/packager/crypto_params.h
@@ -1,19 +1,18 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_MEDIA_PUBLIC_CRYPTO_PARAMS_H_
#define PACKAGER_MEDIA_PUBLIC_CRYPTO_PARAMS_H_
#ifndef PACKAGER_PUBLIC_CRYPTO_PARAMS_H_
#define PACKAGER_PUBLIC_CRYPTO_PARAMS_H_
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>
#include "packager/status.h"
namespace shaka {
/// Encryption key providers. These provide keys to decrypt the content if the
@@ -237,4 +236,4 @@ struct DecryptionParams {
} // namespace shaka
#endif // PACKAGER_MEDIA_PUBLIC_CRYPTO_PARAMS_H_
#endif // PACKAGER_PUBLIC_CRYPTO_PARAMS_H_

include/packager/export.h
@@ -0,0 +1,33 @@
// Copyright 2023 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_PUBLIC_EXPORT_H_
#define PACKAGER_PUBLIC_EXPORT_H_
#if defined(SHARED_LIBRARY_BUILD)
#if defined(_WIN32)
#if defined(SHAKA_IMPLEMENTATION)
#define SHAKA_EXPORT __declspec(dllexport)
#else
#define SHAKA_EXPORT __declspec(dllimport)
#endif // defined(SHAKA_IMPLEMENTATION)
#else // defined(_WIN32)
#if defined(SHAKA_IMPLEMENTATION)
#define SHAKA_EXPORT __attribute__((visibility("default")))
#else
#define SHAKA_EXPORT
#endif
#endif // defined(_WIN32)
#else // defined(SHARED_LIBRARY_BUILD)
#define SHAKA_EXPORT
#endif // defined(SHARED_LIBRARY_BUILD)
#endif // PACKAGER_PUBLIC_EXPORT_H_
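To confirm that `SHAKA_EXPORT` actually exposed symbols in a shared-library build, you can inspect the resulting library; the path below assumes a Linux Ninja build:

```shell
# Show a few exported, defined symbols from the shared library.
nm -D --defined-only build/packager/libpackager.so | head
```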

include/packager/file.h
@@ -1,19 +1,19 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_FILE_FILE_H_
#define PACKAGER_FILE_FILE_H_
#include <stdint.h>
#ifndef PACKAGER_PUBLIC_FILE_H_
#define PACKAGER_PUBLIC_FILE_H_
#include <cstdint>
#include <string>
#include "packager/base/macros.h"
#include "packager/file/public/buffer_callback_params.h"
#include "packager/status.h"
#include <packager/buffer_callback_params.h>
#include <packager/export.h>
#include <packager/macros/classes.h>
#include <packager/status.h>
namespace shaka {
@@ -69,6 +69,12 @@ class SHAKA_EXPORT File {
/// @return Number of bytes written, or a value < 0 on error.
virtual int64_t Write(const void* buffer, uint64_t length) = 0;
/// Close the file for writing. This signals that no more data will be
/// written. Future writes are invalid and their behavior is undefined!
/// Data may still be read from the file after calling this method.
/// Some implementations may ignore this if they cannot use the signal.
virtual void CloseForWriting() = 0;
/// @return Size of the file in bytes. A return value less than zero
/// indicates a problem getting the size.
virtual int64_t Size() = 0;
@@ -135,14 +141,14 @@ class SHAKA_EXPORT File {
/// @param source The file to copy from.
/// @param destination The file to copy to.
/// @return Number of bytes written, or a value < 0 on error.
static int64_t CopyFile(File* source, File* destination);
static int64_t Copy(File* source, File* destination);
/// Copies the contents from source to destination.
/// @param source The file to copy from.
/// @param destination The file to copy to.
/// @param max_copy The maximum number of bytes to copy; < 0 to copy to EOF.
/// @return Number of bytes written, or a value < 0 on error.
static int64_t CopyFile(File* source, File* destination, int64_t max_copy);
static int64_t Copy(File* source, File* destination, int64_t max_copy);
/// @param file_name is the name of the file to be checked.
/// @return true if `file_name` is a local and regular file.
@@ -195,4 +201,4 @@ class SHAKA_EXPORT File {
} // namespace shaka
#endif // PACKAGER_FILE_FILE_H_
#endif // PACKAGER_PUBLIC_FILE_H_

include/packager/hls_params.h
@@ -1,11 +1,11 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_HLS_PUBLIC_HLS_PARAMS_H_
#define PACKAGER_HLS_PUBLIC_HLS_PARAMS_H_
#ifndef PACKAGER_PUBLIC_HLS_PARAMS_H_
#define PACKAGER_PUBLIC_HLS_PARAMS_H_
#include <cstdint>
#include <string>
@@ -67,4 +67,4 @@ struct HlsParams {
} // namespace shaka
#endif // PACKAGER_HLS_PUBLIC_HLS_PARAMS_H_
#endif // PACKAGER_PUBLIC_HLS_PARAMS_H_

include/packager/mp4_output_params.h
@@ -1,11 +1,11 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_MEDIA_PUBLIC_MP4_OUTPUT_PARAMS_H_
#define PACKAGER_MEDIA_PUBLIC_MP4_OUTPUT_PARAMS_H_
#ifndef PACKAGER_PUBLIC_MP4_OUTPUT_PARAMS_H_
#define PACKAGER_PUBLIC_MP4_OUTPUT_PARAMS_H_
namespace shaka {
@@ -30,4 +30,4 @@ struct Mp4OutputParams {
} // namespace shaka
#endif // PACKAGER_MEDIA_PUBLIC_MP4_OUTPUT_PARAMS_H_
#endif // PACKAGER_PUBLIC_MP4_OUTPUT_PARAMS_H_

include/packager/mpd_params.h
@@ -1,11 +1,11 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_MPD_PUBLIC_MPD_PARAMS_H_
#define PACKAGER_MPD_PUBLIC_MPD_PARAMS_H_
#ifndef PACKAGER_PUBLIC_MPD_PARAMS_H_
#define PACKAGER_PUBLIC_MPD_PARAMS_H_
#include <string>
#include <vector>
@@ -106,4 +106,4 @@ struct MpdParams {
} // namespace shaka
#endif // PACKAGER_MPD_PUBLIC_MPD_PARAMS_H_
#endif // PACKAGER_PUBLIC_MPD_PARAMS_H_

include/packager/packager.h
@@ -1,25 +1,26 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_PACKAGER_H_
#define PACKAGER_PACKAGER_H_
#ifndef PACKAGER_PUBLIC_PACKAGER_H_
#define PACKAGER_PUBLIC_PACKAGER_H_
#include <cstdint>
#include <memory>
#include <string>
#include <vector>
#include "packager/file/public/buffer_callback_params.h"
#include "packager/hls/public/hls_params.h"
#include "packager/media/public/ad_cue_generator_params.h"
#include "packager/media/public/chunking_params.h"
#include "packager/media/public/crypto_params.h"
#include "packager/media/public/mp4_output_params.h"
#include "packager/mpd/public/mpd_params.h"
#include "packager/status.h"
#include <packager/ad_cue_generator_params.h>
#include <packager/buffer_callback_params.h>
#include <packager/chunking_params.h>
#include <packager/crypto_params.h>
#include <packager/export.h>
#include <packager/hls_params.h>
#include <packager/mp4_output_params.h>
#include <packager/mpd_params.h>
#include <packager/status.h>
namespace shaka {
@@ -152,9 +153,8 @@ class SHAKA_EXPORT Packager {
/// @param packaging_params contains the packaging parameters.
/// @param stream_descriptors a list of stream descriptors.
/// @return OK on success, an appropriate error code on failure.
Status Initialize(
const PackagingParams& packaging_params,
const std::vector<StreamDescriptor>& stream_descriptors);
Status Initialize(const PackagingParams& packaging_params,
const std::vector<StreamDescriptor>& stream_descriptors);
/// Run the pipeline to completion (or failed / been cancelled). Note
/// that it blocks until completion.
@@ -202,4 +202,4 @@ class SHAKA_EXPORT Packager {
} // namespace shaka
#endif // PACKAGER_PACKAGER_H_
#endif // PACKAGER_PUBLIC_PACKAGER_H_

include/packager/status.h
@@ -1,37 +1,16 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_STATUS_H_
#define PACKAGER_STATUS_H_
#ifndef PACKAGER_PUBLIC_STATUS_H_
#define PACKAGER_PUBLIC_STATUS_H_
#include <iostream>
#include <string>
#if defined(SHARED_LIBRARY_BUILD)
#if defined(_WIN32)
#if defined(SHAKA_IMPLEMENTATION)
#define SHAKA_EXPORT __declspec(dllexport)
#else
#define SHAKA_EXPORT __declspec(dllimport)
#endif // defined(SHAKA_IMPLEMENTATION)
#else // defined(_WIN32)
#if defined(SHAKA_IMPLEMENTATION)
#define SHAKA_EXPORT __attribute__((visibility("default")))
#else
#define SHAKA_EXPORT
#endif
#endif // defined(_WIN32)
#else // defined(SHARED_LIBRARY_BUILD)
#define SHAKA_EXPORT
#endif // defined(SHARED_LIBRARY_BUILD)
#include <packager/export.h>
namespace shaka {
@@ -153,8 +132,8 @@ class SHAKA_EXPORT Status {
// generated copy constructor and assignment operator.
};
SHAKA_EXPORT std::ostream& operator<<(std::ostream& os, const Status& x);
std::ostream& operator<<(std::ostream& os, const Status& x);
} // namespace shaka
#endif // PACKAGER_STATUS_H_
#endif // PACKAGER_PUBLIC_STATUS_H_

link-test/CMakeLists.txt
@@ -0,0 +1,45 @@
# Copyright 2023 Google LLC. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# If we're building a shared library, make sure it works. We only do this for
# a shared library because the static library won't wrap the third-party
# dependencies like absl.
if(BUILD_SHARED_LIBS)
# Install the library and headers to a temporary location.
set(TEST_INSTALL_DIR ${CMAKE_BINARY_DIR}/test-install)
# Custom commands aren't targets, but have outputs.
add_custom_command(
DEPENDS mpd_generator packager libpackager
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
OUTPUT ${TEST_INSTALL_DIR}
COMMAND
${CMAKE_COMMAND} --install . --prefix ${TEST_INSTALL_DIR} --config "$<CONFIG>")
# Custom targets with commands run every time, no matter what. A custom
# target with no command, but which depends on a custom command's output,
# gets us something that acts like a real target and doesn't re-run every
# time.
add_custom_target(test-install ALL DEPENDS ${TEST_INSTALL_DIR})
# Then try to build a very simplistic test app to prove that we can include
# the headers and link the library.
add_executable(packager_link_test test.cc)
# Both of these are needed. The first is a basic dependency to make sure
# test-install runs first, whereas the second treats test.cc as dirty if
# test-install runs again.
add_dependencies(packager_link_test test-install)
set_source_files_properties(test.cc PROPERTIES OBJECT_DEPENDS ${TEST_INSTALL_DIR})
target_link_directories(packager_link_test PRIVATE ${TEST_INSTALL_DIR}/lib)
target_include_directories(packager_link_test PRIVATE ${TEST_INSTALL_DIR}/include)
if(NOT MSVC)
target_link_libraries(packager_link_test -lpackager)
else()
target_link_libraries(packager_link_test ${TEST_INSTALL_DIR}/lib/libpackager.lib)
endif()
endif()
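A sketch of exercising this check directly; the `packager_link_test` target only exists when `BUILD_SHARED_LIBS` is on:

```shell
cmake -B build -G Ninja -DBUILD_SHARED_LIBS=ON
cmake --build build --target packager_link_test --parallel
```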

link-test/README.md
@@ -0,0 +1,5 @@
# Link test for libpackager
This is a dummy application to test linking libpackager. It gives us a build
target that validates our install target works and that our public headers (in
`../include/packager/...`) are complete and self-contained.

link-test/test.cc
@@ -0,0 +1,35 @@
// Copyright 2023 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
// This is a simple app to test linking against a shared libpackager on all
// platforms. It's not meant to do anything useful at all.
#include <iostream>
#include <vector>
#include <packager/packager.h>
int main(int argc, char** argv) {
// Unused. Silence warnings.
(void)argc;
(void)argv;
// Print the packager version.
std::cout << "Packager v" + shaka::Packager::GetLibraryVersion() + "\n";
// Don't bother filling these out. Just make sure it links.
shaka::PackagingParams packaging_params;
std::vector<shaka::StreamDescriptor> stream_descriptors;
// This will fail.
shaka::Packager packager;
shaka::Status status =
packager.Initialize(packaging_params, stream_descriptors);
// Just print the status to make sure we can do that in a custom app.
std::cout << status.ToString() + "\n";
return 0;
}

@@ -4,15 +4,31 @@
var path = require('path');
var spawnSync = require('child_process').spawnSync;
// Command names per-platform:
// Command names per-platform (process.platform) and per-architecture
// (process.arch):
var commandNames = {
linux: 'packager-linux',
darwin: 'packager-osx',
win32: 'packager-win.exe',
linux: {
'x64': 'packager-linux-x64',
'arm64': 'packager-linux-arm64',
},
darwin: {
'x64': 'packager-osx-x64',
},
win32: {
'x64': 'packager-win-x64.exe',
},
};
// Find the platform-specific binary:
var binaryPath = path.resolve(__dirname, 'bin', commandNames[process.platform]);
if (!(process.platform in commandNames)) {
throw new Error('Platform not supported: ' + process.platform);
}
if (!(process.arch in commandNames[process.platform])) {
throw new Error(
'Architecture not supported: ' + process.platform + '/' + process.arch);
}
var commandName = commandNames[process.platform][process.arch];
var binaryPath = path.resolve(__dirname, 'bin', commandName);
// Find the args to pass to that binary:
// argv[0] is node itself, and argv[1] is the script.
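The lookup keys come straight from Node, so you can check which binary the wrapper would select on the current machine:

```shell
node -e 'console.log(process.platform + "/" + process.arch)'
```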

package.json
@@ -2,6 +2,7 @@
"name": "",
"description": "A media packaging tool and SDK.",
"version": "",
"private": false,
"homepage": "https://github.com/shaka-project/shaka-packager",
"author": "Google",
"maintainers": [

@@ -5,11 +5,19 @@ var fs = require('fs');
var path = require('path');
var spawnSync = require('child_process').spawnSync;
// Command names per-platform:
// Command names per-platform (process.platform) and per-architecture
// (process.arch):
var commandNames = {
linux: 'packager-linux',
darwin: 'packager-osx',
win32: 'packager-win.exe',
linux: {
'x64': 'packager-linux-x64',
'arm64': 'packager-linux-arm64',
},
darwin: {
'x64': 'packager-osx-x64',
},
win32: {
'x64': 'packager-win-x64.exe',
},
};
// Get the current package version:
@@ -44,12 +52,23 @@ fs.readdirSync(binFolderPath).forEach(function(childName) {
});
for (var platform in commandNames) {
// Find the destination for this binary:
var command = commandNames[platform];
var binaryPath = path.resolve(binFolderPath, command);
for (var arch in commandNames[platform]) {
// Find the destination for this binary:
var command = commandNames[platform][arch];
var binaryPath = path.resolve(binFolderPath, command);
download(urlBase + command, binaryPath);
fs.chmodSync(binaryPath, 0755);
try {
download(urlBase + command, binaryPath);
fs.chmodSync(binaryPath, 0755);
} catch (error) {
if (arch == 'arm64') {
// Optional. Forks may not have arm64 builds available. Ignore.
} else {
// Required. Re-throw and fail.
throw error;
}
}
}
}
// Fetch LICENSE and README files from the same tag, and include them in the
@@ -83,6 +102,6 @@ function download(url, outputPath) {
console.log('Downloading', url, 'to', outputPath);
var returnValue = spawnSync('curl', args, options);
if (returnValue.status != 0) {
process.exit(returnValue.status);
throw new Error('Download of ' + url + ' failed: ' + returnValue.status);
}
}

packager/CMakeLists.txt
@@ -0,0 +1,266 @@
# Copyright 2022 Google LLC. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# Packager CMake build file.
# Include a module to define standard install directories.
include(GNUInstallDirs)
# Build static libs by default, or shared if BUILD_SHARED_LIBS is on.
if(BUILD_SHARED_LIBS)
add_definitions(-DSHARED_LIBRARY_BUILD)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
endif()
# Global C++ flags.
if(MSVC)
# Warning level 4 and all warnings as errors.
add_compile_options(/W4 /WX)
# Silence a warning from an absl header about alignment in boolean flags.
add_compile_options(/wd4324)
# Silence a warning about STL types in exported classes.
add_compile_options(/wd4251)
# Silence a warning about constant conditional expressions.
add_compile_options(/wd4127)
# We use long-jumps in subtitle_composer.cc due to the API of libpng.
add_compile_options(/wd4611)
# We need /bigobj for the box definitions.
add_compile_options(/bigobj)
# Packager's macro for Windows-specific code.
add_definitions(-DOS_WIN)
# Suppress Microsoft's min() and max() macros, which will conflict with
# things like std::numeric_limits::max() and std::min().
add_definitions(-DNOMINMAX)
# Define this so that we can use fopen() without warnings.
add_definitions(-D_CRT_SECURE_NO_WARNINGS)
# Don't automatically include winsock.h in windows.h. This is needed for us
# to use winsock2.h, which contains definitions that conflict with the
# ancient winsock 1.1 interface in winsock.h.
add_definitions(-DWIN32_LEAN_AND_MEAN)
else()
# Lots of warnings and all warnings as errors.
# Note that we can't use -Wpedantic due to absl's int128 headers.
add_compile_options(-Wall -Wextra -Werror)
# Several warning suppression flags are required on one compiler version and
# not understood by another. Do not treat these as errors.
add_compile_options(-Wno-unknown-warning-option)
endif()
# Global include paths.
# Project root, to reference internal headers as packager/foo/bar/...
include_directories(..)
# Public include folder, to reference public headers as packager/foo.h
include_directories(../include)
# Include our module for gtest-based testing.
include("gtest.cmake")
# Include our module for building protos.
include("protobuf.cmake")
# Subdirectories with their own CMakeLists.txt, all of whose targets are built.
add_subdirectory(file)
add_subdirectory(kv_pairs)
add_subdirectory(media)
add_subdirectory(hls)
add_subdirectory(mpd)
add_subdirectory(status)
add_subdirectory(third_party)
add_subdirectory(tools)
add_subdirectory(utils)
add_subdirectory(version)
set(libpackager_sources
app/job_manager.cc
app/job_manager.h
app/muxer_factory.cc
app/muxer_factory.h
app/packager_util.cc
app/packager_util.h
app/single_thread_job_manager.cc
app/single_thread_job_manager.h
packager.cc
../include/packager/packager.h
)
set(libpackager_deps
file
hls_builder
media_chunking
media_codecs
media_crypto
demuxer
media_event
dvb
mp2t
mp4
packed_audio
ttml
formats_webm
wvm
media_replicator
media_trick_play
mpd_builder
mbedtls
string_utils
version
)
# A static library target is always built.
add_library(libpackager_static STATIC ${libpackager_sources})
target_link_libraries(libpackager_static ${libpackager_deps})
# And always installed as libpackager.a / libpackager.lib:
if(NOT MSVC)
set_property(TARGET libpackager_static PROPERTY OUTPUT_NAME packager)
else()
set_property(TARGET libpackager_static PROPERTY OUTPUT_NAME libpackager)
endif()
# A shared library target is conditional (default OFF):
if(BUILD_SHARED_LIBS)
add_library(libpackager_shared SHARED ${libpackager_sources})
target_link_libraries(libpackager_shared ${libpackager_deps})
target_compile_definitions(libpackager_shared PUBLIC SHAKA_IMPLEMENTATION)
# And always installed as libpackager.so / libpackager.dll:
if(NOT MSVC)
set_property(TARGET libpackager_shared PROPERTY OUTPUT_NAME packager)
else()
set_property(TARGET libpackager_shared PROPERTY OUTPUT_NAME libpackager)
endif()
# If we're building a shared library, this is what the "libpackager" target
# aliases to.
add_library(libpackager ALIAS libpackager_shared)
else()
# If we're not building a shared library, the "libpackager" target aliases to
# the static library.
add_library(libpackager ALIAS libpackager_static)
endif()
add_executable(packager
app/ad_cue_generator_flags.cc
app/ad_cue_generator_flags.h
app/crypto_flags.cc
app/crypto_flags.h
app/hls_flags.cc
app/hls_flags.h
app/manifest_flags.cc
app/manifest_flags.h
app/mpd_flags.cc
app/mpd_flags.h
app/muxer_flags.cc
app/muxer_flags.h
app/packager_main.cc
app/playready_key_encryption_flags.cc
app/playready_key_encryption_flags.h
app/raw_key_encryption_flags.cc
app/raw_key_encryption_flags.h
app/protection_system_flags.cc
app/protection_system_flags.h
app/retired_flags.cc
app/retired_flags.h
app/stream_descriptor.cc
app/stream_descriptor.h
app/validate_flag.cc
app/validate_flag.h
app/vlog_flags.cc
app/vlog_flags.h
app/widevine_encryption_flags.cc
app/widevine_encryption_flags.h
)
target_link_libraries(packager
absl::flags
absl::flags_parse
absl::log
absl::log_flags
absl::strings
hex_bytes_flags
libpackager
license_notice
string_utils
)
add_executable(mpd_generator
app/mpd_generator.cc
app/mpd_generator_flags.h
app/vlog_flags.cc
app/vlog_flags.h
)
target_link_libraries(mpd_generator
absl::flags
absl::flags_parse
absl::log
absl::log_flags
absl::strings
license_notice
mpd_builder
mpd_util
)
add_executable(packager_test
packager_test.cc
)
target_link_libraries(packager_test
libpackager
gmock
gtest
gtest_main)
list(APPEND packager_test_py_sources
"${CMAKE_CURRENT_SOURCE_DIR}/app/test/packager_app.py"
"${CMAKE_CURRENT_SOURCE_DIR}/app/test/packager_test.py"
"${CMAKE_CURRENT_SOURCE_DIR}/app/test/test_env.py")
list(APPEND packager_test_py_output
"${CMAKE_CURRENT_BINARY_DIR}/packager_app.py"
"${CMAKE_CURRENT_BINARY_DIR}/packager_test.py"
"${CMAKE_CURRENT_BINARY_DIR}/test_env.py")
add_custom_command(
OUTPUT ${packager_test_py_output}
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/app/test/packager_app.py ${CMAKE_CURRENT_BINARY_DIR}/packager_app.py
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/app/test/packager_test.py ${CMAKE_CURRENT_BINARY_DIR}/packager_test.py
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/app/test/test_env.py ${CMAKE_CURRENT_BINARY_DIR}/test_env.py
DEPENDS ${packager_test_py_sources}
)
add_custom_target(packager_test_py_copy ALL
DEPENDS ${packager_test_py_output} packager
SOURCES ${packager_test_py_sources}
)
if(NOT SKIP_INTEGRATION_TESTS)
add_test(NAME packager_test_py
COMMAND ${PYTHON_EXECUTABLE} packager_test.py
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
endif()
configure_file(packager.pc.in packager.pc @ONLY)
# Always install the binaries.
install(TARGETS mpd_generator packager)
# Always install the python tools.
install(PROGRAMS ${CMAKE_CURRENT_BINARY_DIR}/pssh-box.py
DESTINATION ${CMAKE_INSTALL_BINDIR})
install(DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/pssh-box-protos
DESTINATION ${CMAKE_INSTALL_BINDIR})
# With shared libraries, also install the library, headers, and pkgconfig.
# The static library isn't usable as a standalone because it doesn't include
# its static dependencies (zlib, absl, etc).
if(BUILD_SHARED_LIBS)
install(TARGETS libpackager_shared)
install(DIRECTORY ../include/packager
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/packager.pc
DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)
endif()
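For illustration, a minimal consumer of the installed shared library might look like the sketch below. It assumes the public shaka::Packager API declared in ../include/packager/packager.h and a build against the installed packager.pc; the file names and stream selector are placeholders, not part of this tree.
// consumer.cc -- hypothetical example, not part of this tree.
// Build (sketch): c++ consumer.cc $(pkg-config --cflags --libs packager)
#include <packager/packager.h>
int main() {
  shaka::PackagingParams packaging_params;
  packaging_params.mpd_params.mpd_output = "output.mpd";  // placeholder
  shaka::StreamDescriptor stream;
  stream.input = "input.mp4";       // placeholder
  stream.stream_selector = "video";
  stream.output = "video_out.mp4";  // placeholder
  shaka::Packager packager;
  shaka::Status status = packager.Initialize(packaging_params, {stream});
  if (status.ok())
    status = packager.Run();
  return status.ok() ? 0 : 1;
}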


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,14 +6,15 @@
//
// Defines cuepoint generator flags.
#include "packager/app/ad_cue_generator_flags.h"
#include <packager/app/ad_cue_generator_flags.h>
DEFINE_string(ad_cues,
"",
"List of cuepoint markers."
"This flag accepts semicolon separated pairs and components in "
"the pair are separated by a comma and the second component "
"duration is optional. For example --ad_cues "
"{start_time}[,{duration}][;{start_time}[,{duration}]]..."
"The start_time represents the start of the cue marker in "
"seconds relative to the start of the program.");
ABSL_FLAG(std::string,
ad_cues,
"",
"List of cuepoint markers."
"This flag accepts semicolon separated pairs and components in "
"the pair are separated by a comma and the second component "
"duration is optional. For example --ad_cues "
"{start_time}[,{duration}][;{start_time}[,{duration}]]..."
"The start_time represents the start of the cue marker in "
"seconds relative to the start of the program.");


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -7,8 +7,9 @@
#ifndef PACKAGER_APP_AD_CUE_GENERATOR_FLAGS_H_
#define PACKAGER_APP_AD_CUE_GENERATOR_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_string(ad_cues);
ABSL_DECLARE_FLAG(std::string, ad_cues);
#endif // PACKAGER_APP_AD_CUE_GENERATOR_FLAGS_H_
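Code that includes a header like this one no longer touches a FLAGS_ global directly; it goes through absl::GetFlag, which returns a copy of the current value. A minimal sketch of a caller (the helper function is hypothetical, not in this diff):
#include <string>
#include <absl/flags/flag.h>
#include <absl/log/log.h>
#include <packager/app/ad_cue_generator_flags.h>
void LogAdCues() {  // hypothetical helper for illustration
  // absl::GetFlag returns the flag value by copy and is thread-safe.
  const std::string ad_cues = absl::GetFlag(FLAGS_ad_cues);
  if (!ad_cues.empty())
    LOG(INFO) << "Ad cues requested: " << ad_cues;
}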


@ -1,18 +1,22 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/crypto_flags.h"
#include <packager/app/crypto_flags.h>
#include <stdio.h>
#include <cstdio>
DEFINE_string(protection_scheme,
"cenc",
"Specify a protection scheme, 'cenc' or 'cbc1' or pattern-based "
"protection schemes 'cens' or 'cbcs'.");
DEFINE_int32(
#include <absl/flags/flag.h>
ABSL_FLAG(std::string,
protection_scheme,
"cenc",
"Specify a protection scheme, 'cenc' or 'cbc1' or pattern-based "
"protection schemes 'cens' or 'cbcs'.");
ABSL_FLAG(
int32_t,
crypt_byte_block,
1,
"Specify the count of the encrypted blocks in the protection pattern, "
@ -20,16 +24,21 @@ DEFINE_int32(
"patterns (crypt_byte_block:skip_byte_block): 1:9 (default), 5:5, 10:0. "
"Apply to video streams with 'cbcs' and 'cens' protection schemes only; "
"ignored otherwise.");
DEFINE_int32(
ABSL_FLAG(
int32_t,
skip_byte_block,
9,
"Specify the count of the unencrypted blocks in the protection pattern. "
"Apply to video streams with 'cbcs' and 'cens' protection schemes only; "
"ignored otherwise.");
DEFINE_bool(vp9_subsample_encryption, true, "Enable VP9 subsample encryption.");
DEFINE_string(playready_extra_header_data,
"",
"Extra XML data to add to PlayReady headers.");
ABSL_FLAG(bool,
vp9_subsample_encryption,
true,
"Enable VP9 subsample encryption.");
ABSL_FLAG(std::string,
playready_extra_header_data,
"",
"Extra XML data to add to PlayReady headers.");
bool ValueNotGreaterThanTen(const char* flagname, int32_t value) {
if (value > 10) {
@ -54,6 +63,26 @@ bool ValueIsXml(const char* flagname, const std::string& value) {
return true;
}
DEFINE_validator(crypt_byte_block, &ValueNotGreaterThanTen);
DEFINE_validator(skip_byte_block, &ValueNotGreaterThanTen);
DEFINE_validator(playready_extra_header_data, &ValueIsXml);
namespace shaka {
bool ValidateCryptoFlags() {
bool success = true;
auto crypt_byte_block = absl::GetFlag(FLAGS_crypt_byte_block);
if (!ValueNotGreaterThanTen("crypt_byte_block", crypt_byte_block)) {
success = false;
}
auto skip_byte_block = absl::GetFlag(FLAGS_skip_byte_block);
if (!ValueNotGreaterThanTen("skip_byte_block", skip_byte_block)) {
success = false;
}
auto playready_extra_header_data =
absl::GetFlag(FLAGS_playready_extra_header_data);
if (!ValueIsXml("playready_extra_header_data", playready_extra_header_data)) {
success = false;
}
return success;
}
} // namespace shaka
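Abseil has no counterpart to gflags' DEFINE_validator, so validation becomes the explicit ValidateCryptoFlags() above, which the main program must call itself after flag parsing. A hypothetical call site, assuming it sits inside main() with absl/flags/parse.h and this header included (where it actually runs is up to packager_main.cc):
absl::ParseCommandLine(argc, argv);
if (!shaka::ValidateCryptoFlags()) {
  // The validators above already logged which flag was rejected.
  return 1;
}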


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -10,12 +10,17 @@
#ifndef PACKAGER_APP_CRYPTO_FLAGS_H_
#define PACKAGER_APP_CRYPTO_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_string(protection_scheme);
DECLARE_int32(crypt_byte_block);
DECLARE_int32(skip_byte_block);
DECLARE_bool(vp9_subsample_encryption);
DECLARE_string(playready_extra_header_data);
ABSL_DECLARE_FLAG(std::string, protection_scheme);
ABSL_DECLARE_FLAG(int32_t, crypt_byte_block);
ABSL_DECLARE_FLAG(int32_t, skip_byte_block);
ABSL_DECLARE_FLAG(bool, vp9_subsample_encryption);
ABSL_DECLARE_FLAG(std::string, playready_extra_header_data);
namespace shaka {
bool ValidateCryptoFlags();
}
#endif // PACKAGER_APP_CRYPTO_FLAGS_H_


@ -1,25 +0,0 @@
// Copyright 2017 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/gflags_hex_bytes.h"
#include "packager/base/strings/string_number_conversions.h"
namespace shaka {
bool ValidateHexString(const char* flagname,
const std::string& value,
std::vector<uint8_t>* value_bytes) {
std::vector<uint8_t> temp_value_bytes;
if (!value.empty() && !base::HexStringToBytes(value, &temp_value_bytes)) {
printf("Invalid hex string for --%s: %s\n", flagname, value.c_str());
return false;
}
value_bytes->swap(temp_value_bytes);
return true;
}
} // namespace shaka


@ -1,50 +0,0 @@
// Copyright 2017 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
//
// Extends gflags to support hex formatted bytes.
#ifndef PACKAGER_APP_GFLAGS_HEX_BYTES_H_
#define PACKAGER_APP_GFLAGS_HEX_BYTES_H_
#include <gflags/gflags.h>
#include <string>
#include <vector>
namespace shaka {
bool ValidateHexString(const char* flagname,
const std::string& value,
std::vector<uint8_t>* value_bytes);
} // namespace shaka
// The raw bytes will be available in FLAGS_##name##_bytes.
// The original gflag variable FLAGS_##name is defined in shaka_gflags_extension
// and not exposed directly.
#define DECLARE_hex_bytes(name) \
namespace shaka_gflags_extension { \
DECLARE_string(name); \
} \
namespace shaka_gflags_extension { \
extern std::vector<uint8_t> FLAGS_##name##_bytes; \
} \
using shaka_gflags_extension::FLAGS_##name##_bytes
#define DEFINE_hex_bytes(name, val, txt) \
namespace shaka_gflags_extension { \
DEFINE_string(name, val, txt); \
} \
namespace shaka_gflags_extension { \
std::vector<uint8_t> FLAGS_##name##_bytes; \
static bool hex_validator_##name = gflags::RegisterFlagValidator( \
&FLAGS_##name, \
[](const char* flagname, const std::string& value) { \
return shaka::ValidateHexString(flagname, value, \
&FLAGS_##name##_bytes); \
}); \
} \
using shaka_gflags_extension::FLAGS_##name##_bytes
#endif // PACKAGER_APP_GFLAGS_HEX_BYTES_H_
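The deleted gflags macros are superseded by a custom Abseil flag type (the hex_bytes_flags target linked by the packager executable earlier in this CMakeLists.txt), which is why later code reads absl::GetFlag(FLAGS_key).bytes. Abseil lets any struct serve as a flag type once AbslParseFlag/AbslUnparseFlag are defined for it; a minimal sketch of that mechanism, with hypothetical names:
#include <cstdint>
#include <string>
#include <vector>
#include <absl/strings/str_format.h>
#include <absl/strings/string_view.h>
struct HexBytes {  // hypothetical stand-in for the real flag type
  std::vector<uint8_t> bytes;
};
namespace {
int HexDigit(char c) {
  if (c >= '0' && c <= '9') return c - '0';
  if (c >= 'a' && c <= 'f') return c - 'a' + 10;
  if (c >= 'A' && c <= 'F') return c - 'A' + 10;
  return -1;
}
}  // namespace
// Abseil calls this to parse --some_flag=<hex> into a HexBytes value.
bool AbslParseFlag(absl::string_view text, HexBytes* out, std::string* error) {
  if (text.size() % 2 != 0) {
    *error = "hex string must contain an even number of digits";
    return false;
  }
  out->bytes.clear();
  for (size_t i = 0; i < text.size(); i += 2) {
    const int hi = HexDigit(text[i]);
    const int lo = HexDigit(text[i + 1]);
    if (hi < 0 || lo < 0) {
      *error = "invalid hex digit";
      return false;
    }
    out->bytes.push_back(static_cast<uint8_t>(hi * 16 + lo));
  }
  return true;
}
// Abseil calls this to render the current value, e.g. for --help output.
std::string AbslUnparseFlag(const HexBytes& value) {
  std::string hex;
  for (uint8_t byte : value.bytes)
    hex += absl::StrFormat("%02x", byte);
  return hex;
}
With that in place, ABSL_FLAG(HexBytes, key, ...) makes --key=<hex> parse straight into raw bytes, with no registered-validator machinery needed.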


@ -1,32 +1,37 @@
// Copyright 2016 Google Inc. All rights reserved.
// Copyright 2016 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/hls_flags.h"
#include <packager/app/hls_flags.h>
DEFINE_string(hls_master_playlist_output,
"",
"Output path for the master playlist for HLS. This flag must be"
"used to output HLS.");
DEFINE_string(hls_base_url,
"",
"The base URL for the Media Playlists and media files listed in "
"the playlists. This is the prefix for the files.");
DEFINE_string(hls_key_uri,
"",
"The key uri for 'identity' and 'com.apple.streamingkeydelivery' "
"key formats. Ignored if the playlist is not encrypted or not "
"using the above key formats.");
DEFINE_string(hls_playlist_type,
"VOD",
"VOD, EVENT, or LIVE. This defines the EXT-X-PLAYLIST-TYPE in "
"the HLS specification. For hls_playlist_type of LIVE, "
"EXT-X-PLAYLIST-TYPE tag is omitted.");
DEFINE_int32(hls_media_sequence_number,
0,
"Number. This HLS-only parameter defines the initial "
"EXT-X-MEDIA-SEQUENCE value, which allows continuous media "
"sequence across packager restarts. See #691 for more "
"information about the reasoning of this and its use cases.");
ABSL_FLAG(std::string,
hls_master_playlist_output,
"",
"Output path for the master playlist for HLS. This flag must be"
"used to output HLS.");
ABSL_FLAG(std::string,
hls_base_url,
"",
"The base URL for the Media Playlists and media files listed in "
"the playlists. This is the prefix for the files.");
ABSL_FLAG(std::string,
hls_key_uri,
"",
"The key uri for 'identity' and 'com.apple.streamingkeydelivery' "
"key formats. Ignored if the playlist is not encrypted or not "
"using the above key formats.");
ABSL_FLAG(std::string,
hls_playlist_type,
"VOD",
"VOD, EVENT, or LIVE. This defines the EXT-X-PLAYLIST-TYPE in "
"the HLS specification. For hls_playlist_type of LIVE, "
"EXT-X-PLAYLIST-TYPE tag is omitted.");
ABSL_FLAG(int32_t,
hls_media_sequence_number,
0,
"Number. This HLS-only parameter defines the initial "
"EXT-X-MEDIA-SEQUENCE value, which allows continuous media "
"sequence across packager restarts. See #691 for more "
"information about the reasoning of this and its use cases.");


@ -1,4 +1,4 @@
// Copyright 2016 Google Inc. All rights reserved.
// Copyright 2016 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -7,12 +7,13 @@
#ifndef PACKAGER_APP_HLS_FLAGS_H_
#define PACKAGER_APP_HLS_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_string(hls_master_playlist_output);
DECLARE_string(hls_base_url);
DECLARE_string(hls_key_uri);
DECLARE_string(hls_playlist_type);
DECLARE_int32(hls_media_sequence_number);
ABSL_DECLARE_FLAG(std::string, hls_master_playlist_output);
ABSL_DECLARE_FLAG(std::string, hls_base_url);
ABSL_DECLARE_FLAG(std::string, hls_key_uri);
ABSL_DECLARE_FLAG(std::string, hls_playlist_type);
ABSL_DECLARE_FLAG(int32_t, hls_media_sequence_number);
#endif // PACKAGER_APP_HLS_FLAGS_H_


@ -1,33 +1,58 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/job_manager.h"
#include <packager/app/job_manager.h>
#include "packager/app/libcrypto_threading.h"
#include "packager/media/chunking/sync_point_queue.h"
#include "packager/media/origin/origin_handler.h"
#include <set>
#include <absl/log/check.h>
#include <packager/media/chunking/sync_point_queue.h>
#include <packager/media/origin/origin_handler.h>
namespace shaka {
namespace media {
Job::Job(const std::string& name, std::shared_ptr<OriginHandler> work)
: SimpleThread(name),
Job::Job(const std::string& name,
std::shared_ptr<OriginHandler> work,
OnCompleteFunction on_complete)
: name_(name),
work_(std::move(work)),
wait_(base::WaitableEvent::ResetPolicy::MANUAL,
base::WaitableEvent::InitialState::NOT_SIGNALED) {
on_complete_(on_complete),
status_(error::Code::UNKNOWN, "Job uninitialized") {
DCHECK(work_);
}
const Status& Job::Initialize() {
status_ = work_->Initialize();
return status_;
}
void Job::Start() {
thread_.reset(new std::thread(&Job::Run, this));
}
void Job::Cancel() {
work_->Cancel();
}
void Job::Run() {
status_ = work_->Run();
wait_.Signal();
const Status& Job::Run() {
if (status_.ok()) // initialized correctly
status_ = work_->Run();
on_complete_(this);
return status_;
}
void Job::Join() {
if (thread_) {
thread_->join();
thread_ = nullptr;
}
}
JobManager::JobManager(std::unique_ptr<SyncPointQueue> sync_points)
@ -35,81 +60,77 @@ JobManager::JobManager(std::unique_ptr<SyncPointQueue> sync_points)
void JobManager::Add(const std::string& name,
std::shared_ptr<OriginHandler> handler) {
// Stores Job entries for delayed construction of Job objects, to avoid
// setting up SimpleThread until we know all workers can be initialized
// successfully.
job_entries_.push_back({name, std::move(handler)});
jobs_.emplace_back(new Job(
name, std::move(handler),
std::bind(&JobManager::OnJobComplete, this, std::placeholders::_1)));
}
Status JobManager::InitializeJobs() {
Status status;
for (const JobEntry& job_entry : job_entries_)
status.Update(job_entry.worker->Initialize());
if (!status.ok())
return status;
// Create Job objects after successfully initialized all workers.
for (const JobEntry& job_entry : job_entries_)
jobs_.emplace_back(new Job(job_entry.name, std::move(job_entry.worker)));
for (auto& job : jobs_)
status.Update(job->Initialize());
return status;
}
Status JobManager::RunJobs() {
// We need to store the jobs and the waits separately in order to use the
// |WaitMany| function. |WaitMany| takes an array of WaitableEvents but we
// need to access the jobs in order to join the thread and check the status.
// The indexes needs to be check in sync or else we won't be able to relate a
// WaitableEvent back to the job.
std::vector<Job*> active_jobs;
std::vector<base::WaitableEvent*> active_waits;
std::set<Job*> active_jobs;
// Start every job and add it to the active jobs list so that we can wait
// on each one.
for (auto& job : jobs_) {
job->Start();
active_jobs.push_back(job.get());
active_waits.push_back(job->wait());
active_jobs.insert(job.get());
}
// Wait for all jobs to complete or an error occurs.
// Wait for all jobs to complete or any job to error.
Status status;
while (status.ok() && active_jobs.size()) {
// Wait for an event to finish and then update our status so that we can
// quit if something has gone wrong.
const size_t done =
base::WaitableEvent::WaitMany(active_waits.data(), active_waits.size());
Job* job = active_jobs[done];
{
absl::MutexLock lock(&mutex_);
while (status.ok() && active_jobs.size()) {
// any_job_complete_ is protected by mutex_.
any_job_complete_.Wait(&mutex_);
job->Join();
status.Update(job->status());
// Remove the job and the wait from our tracking.
active_jobs.erase(active_jobs.begin() + done);
active_waits.erase(active_waits.begin() + done);
// complete_ is protected by mutex_.
for (const auto& entry : complete_) {
Job* job = entry.first;
bool complete = entry.second;
if (complete) {
job->Join();
status.Update(job->status());
active_jobs.erase(job);
}
}
}
}
// If the main loop has exited and there are still jobs running,
// we need to cancel them and clean-up.
if (sync_points_)
sync_points_->Cancel();
for (auto& job : active_jobs) {
job->Cancel();
}
for (auto& job : active_jobs) {
for (auto& job : active_jobs)
job->Cancel();
for (auto& job : active_jobs)
job->Join();
}
return status;
}
void JobManager::OnJobComplete(Job* job) {
absl::MutexLock lock(&mutex_);
// These are both protected by mutex_.
complete_[job] = true;
any_job_complete_.Signal();
}
void JobManager::CancelJobs() {
if (sync_points_)
sync_points_->Cancel();
for (auto& job : jobs_) {
for (auto& job : jobs_)
job->Cancel();
}
}
} // namespace media
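Taken together, a caller drives the reworked JobManager roughly as in this sketch; the sync point queue and the handler objects are placeholders, not names from this diff:
JobManager job_manager(std::move(sync_points));  // sync_points: placeholder
job_manager.Add("audio remux", audio_handler);   // handlers: placeholders
job_manager.Add("video remux", video_handler);
// InitializeJobs() must succeed before RunJobs(), which blocks until every
// job finishes or any job reports an error.
Status status = job_manager.InitializeJobs();
if (status.ok())
  status = job_manager.RunJobs();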


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -7,11 +7,15 @@
#ifndef PACKAGER_APP_JOB_MANAGER_H_
#define PACKAGER_APP_JOB_MANAGER_H_
#include <functional>
#include <map>
#include <memory>
#include <thread>
#include <vector>
#include "packager/base/threading/simple_thread.h"
#include "packager/status.h"
#include <absl/synchronization/mutex.h>
#include <packager/status.h>
namespace shaka {
namespace media {
@ -21,33 +25,53 @@ class SyncPointQueue;
// A job is a single line of work that is expected to run in parallel with
// other jobs.
class Job : public base::SimpleThread {
class Job {
public:
Job(const std::string& name, std::shared_ptr<OriginHandler> work);
typedef std::function<void(Job*)> OnCompleteFunction;
// Request that the job stops executing. This is only a request and
// will not block. If you want to wait for the job to complete, use
// |wait|.
Job(const std::string& name,
std::shared_ptr<OriginHandler> work,
OnCompleteFunction on_complete);
// Initialize the work object. Call before Start() or Run(). Updates status()
// and returns it for convenience.
const Status& Initialize();
// Begin the job in a new thread. This is only a request and will not block.
// If you want to wait for the job to complete, use Join().
// Use either Start() for threaded operation or Run() for non-threaded
// operation. DO NOT USE BOTH!
void Start();
// Run the job's work synchronously, blocking until complete. Updates status()
// and returns it for convenience.
// Use either Start() for threaded operation or Run() for non-threaded
// operation. DO NOT USE BOTH!
const Status& Run();
// Request that the job stops executing. This is only a request and will not
// block. If you want to wait for the job to complete, use Join().
void Cancel();
// Get the current status of the job. If the job failed to initialize
// or encountered an error during execution this will return the error.
// Join the thread, if any was started. Blocks until the thread has stopped.
void Join();
// Get the current status of the job. If the job failed to initialize or
// encountered an error during execution this will return the error.
const Status& status() const { return status_; }
// If you want to wait for this job to complete, this will return the
// WaitableEvent you can wait on.
base::WaitableEvent* wait() { return &wait_; }
// The name given to this job in the constructor.
const std::string& name() const { return name_; }
private:
Job(const Job&) = delete;
Job& operator=(const Job&) = delete;
void Run() override;
std::string name_;
std::shared_ptr<OriginHandler> work_;
OnCompleteFunction on_complete_;
std::unique_ptr<std::thread> thread_;
Status status_;
base::WaitableEvent wait_;
};
// Similar to a thread pool, JobManager manages multiple jobs that are expected
@ -70,7 +94,7 @@ class JobManager {
// Initialize all registered jobs. If any job fails to initialize, this will
// return the error and it will not be safe to call |RunJobs| as not all jobs
// will be properly initialized.
virtual Status InitializeJobs();
Status InitializeJobs();
// Run all registered jobs. Before calling this make sure that
// |InitializeJobs| returned |Status::OK|. This call is blocking and will
@ -87,16 +111,17 @@ class JobManager {
JobManager(const JobManager&) = delete;
JobManager& operator=(const JobManager&) = delete;
struct JobEntry {
std::string name;
std::shared_ptr<OriginHandler> worker;
};
// Stores Job entries for delayed construction of Job object.
std::vector<JobEntry> job_entries_;
std::vector<std::unique_ptr<Job>> jobs_;
void OnJobComplete(Job* job);
// Stored in JobManager so JobManager can cancel |sync_points| when any job
// fails or is cancelled.
std::unique_ptr<SyncPointQueue> sync_points_;
std::vector<std::unique_ptr<Job>> jobs_;
absl::Mutex mutex_;
std::map<Job*, bool> complete_ ABSL_GUARDED_BY(mutex_);
absl::CondVar any_job_complete_ ABSL_GUARDED_BY(mutex_);
};
} // namespace media


@ -1,52 +0,0 @@
// Copyright 2014 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/libcrypto_threading.h"
#include <openssl/thread.h>
#include <memory>
#include "packager/base/logging.h"
#include "packager/base/synchronization/lock.h"
#include "packager/base/threading/platform_thread.h"
namespace shaka {
namespace media {
namespace {
std::unique_ptr<base::Lock[]> global_locks;
void LockFunction(int mode, int n, const char* file, int line) {
VLOG(2) << "CryptoLock @ " << file << ":" << line;
if (mode & CRYPTO_LOCK)
global_locks[n].Acquire();
else
global_locks[n].Release();
}
void ThreadIdFunction(CRYPTO_THREADID* id) {
CRYPTO_THREADID_set_numeric(
id, static_cast<unsigned long>(base::PlatformThread::CurrentId()));
}
} // namespace
LibcryptoThreading::LibcryptoThreading() {
global_locks.reset(new base::Lock[CRYPTO_num_locks()]);
CRYPTO_THREADID_set_callback(ThreadIdFunction);
CRYPTO_set_locking_callback(LockFunction);
}
LibcryptoThreading::~LibcryptoThreading() {
CRYPTO_THREADID_set_callback(NULL);
CRYPTO_set_locking_callback(NULL);
global_locks.reset();
}
} // namespace media
} // namespace shaka


@ -1,28 +0,0 @@
// Copyright 2014 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef APP_LIBCRYPTO_THREADING_H_
#define APP_LIBCRYPTO_THREADING_H_
#include "packager/base/macros.h"
namespace shaka {
namespace media {
/// Convenience class which initializes and terminates libcrypto threading.
class LibcryptoThreading {
public:
LibcryptoThreading();
~LibcryptoThreading();
private:
DISALLOW_COPY_AND_ASSIGN(LibcryptoThreading);
};
} // namespace media
} // namespace shaka
#endif // APP_LIBCRYPTO_THREADING_H_


@ -1,16 +1,18 @@
// Copyright 2018 Google Inc. All rights reserved.
// Copyright 2018 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/manifest_flags.h"
#include <packager/app/manifest_flags.h>
DEFINE_double(time_shift_buffer_depth,
1800.0,
"Guaranteed duration of the time shifting buffer for HLS LIVE "
"playlists and DASH dynamic media presentations, in seconds.");
DEFINE_uint64(
ABSL_FLAG(double,
time_shift_buffer_depth,
1800.0,
"Guaranteed duration of the time shifting buffer for HLS LIVE "
"playlists and DASH dynamic media presentations, in seconds.");
ABSL_FLAG(
uint64_t,
preserved_segments_outside_live_window,
50,
"Segments outside the live window (defined by '--time_shift_buffer_depth') "
@ -19,17 +21,19 @@ DEFINE_uint64(
"stages of content serving pipeline, so that the segments stay accessible "
"as they may still be accessed by the player."
"The segments are not removed if the value is zero.");
DEFINE_string(default_language,
"",
"For DASH, any audio/text tracks tagged with this language will "
"have <Role ... value=\"main\" /> in the manifest; For HLS, the "
"first audio/text rendition in a group tagged with this language "
"will have 'DEFAULT' attribute set to 'YES'. This allows the "
"player to choose the correct default language for the content."
"This applies to both audio and text tracks. The default "
"language for text tracks can be overriden by "
"'--default_text_language'.");
DEFINE_string(default_text_language,
"",
"Same as above, but this applies to text tracks only, and "
"overrides the default language for text tracks.");
ABSL_FLAG(std::string,
default_language,
"",
"For DASH, any audio/text tracks tagged with this language will "
"have <Role ... value=\"main\" /> in the manifest; For HLS, the "
"first audio/text rendition in a group tagged with this language "
"will have 'DEFAULT' attribute set to 'YES'. This allows the "
"player to choose the correct default language for the content."
"This applies to both audio and text tracks. The default "
"language for text tracks can be overriden by "
"'--default_text_language'.");
ABSL_FLAG(std::string,
default_text_language,
"",
"Same as above, but this applies to text tracks only, and "
"overrides the default language for text tracks.");


@ -1,4 +1,4 @@
// Copyright 2018 Google Inc. All rights reserved.
// Copyright 2018 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -9,11 +9,12 @@
#ifndef PACKAGER_APP_MANIFEST_FLAGS_H_
#define PACKAGER_APP_MANIFEST_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_double(time_shift_buffer_depth);
DECLARE_uint64(preserved_segments_outside_live_window);
DECLARE_string(default_language);
DECLARE_string(default_text_language);
ABSL_DECLARE_FLAG(double, time_shift_buffer_depth);
ABSL_DECLARE_FLAG(uint64_t, preserved_segments_outside_live_window);
ABSL_DECLARE_FLAG(std::string, default_language);
ABSL_DECLARE_FLAG(std::string, default_text_language);
#endif // PACKAGER_APP_MANIFEST_FLAGS_H_


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,48 +6,57 @@
//
// Defines Mpd flags.
#include "packager/app/mpd_flags.h"
#include <packager/app/mpd_flags.h>
DEFINE_bool(generate_static_live_mpd,
false,
"Set to true to generate static mpd. If segment_template is "
"specified in stream descriptors, shaka-packager generates dynamic "
"mpd by default; if this flag is enabled, shaka-packager generates "
"static mpd instead. Note that if segment_template is not "
"specified, shaka-packager always generates static mpd regardless "
"of the value of this flag.");
DEFINE_bool(output_media_info,
false,
"Create a human readable format of MediaInfo. The output file name "
"will be the name specified by output flag, suffixed with "
"'.media_info'.");
DEFINE_string(mpd_output, "", "MPD output file name.");
DEFINE_string(base_urls,
"",
"Comma separated BaseURLs for the MPD. The values will be added "
"as <BaseURL> element(s) immediately under the <MPD> element.");
DEFINE_double(min_buffer_time,
2.0,
"Specifies, in seconds, a common duration used in the definition "
"of the MPD Representation data rate.");
DEFINE_double(minimum_update_period,
5.0,
"Indicates to the player how often to refresh the media "
"presentation description in seconds. This value is used for "
"dynamic MPD only.");
DEFINE_double(suggested_presentation_delay,
0.0,
"Specifies a delay, in seconds, to be added to the media "
"presentation time. This value is used for dynamic MPD only.");
DEFINE_string(utc_timings,
"",
"Comma separated UTCTiming schemeIdUri and value pairs for the "
"MPD. This value is used for dynamic MPD only.");
DEFINE_bool(generate_dash_if_iop_compliant_mpd,
true,
"Try to generate DASH-IF IOP compliant MPD. This is best effort "
"and does not guarantee compliance.");
DEFINE_bool(
ABSL_FLAG(bool,
generate_static_live_mpd,
false,
"Set to true to generate static mpd. If segment_template is "
"specified in stream descriptors, shaka-packager generates dynamic "
"mpd by default; if this flag is enabled, shaka-packager generates "
"static mpd instead. Note that if segment_template is not "
"specified, shaka-packager always generates static mpd regardless "
"of the value of this flag.");
ABSL_FLAG(bool,
output_media_info,
false,
"Create a human readable format of MediaInfo. The output file name "
"will be the name specified by output flag, suffixed with "
"'.media_info'.");
ABSL_FLAG(std::string, mpd_output, "", "MPD output file name.");
ABSL_FLAG(std::string,
base_urls,
"",
"Comma separated BaseURLs for the MPD. The values will be added "
"as <BaseURL> element(s) immediately under the <MPD> element.");
ABSL_FLAG(double,
min_buffer_time,
2.0,
"Specifies, in seconds, a common duration used in the definition "
"of the MPD Representation data rate.");
ABSL_FLAG(double,
minimum_update_period,
5.0,
"Indicates to the player how often to refresh the media "
"presentation description in seconds. This value is used for "
"dynamic MPD only.");
ABSL_FLAG(double,
suggested_presentation_delay,
0.0,
"Specifies a delay, in seconds, to be added to the media "
"presentation time. This value is used for dynamic MPD only.");
ABSL_FLAG(std::string,
utc_timings,
"",
"Comma separated UTCTiming schemeIdUri and value pairs for the "
"MPD. This value is used for dynamic MPD only.");
ABSL_FLAG(bool,
generate_dash_if_iop_compliant_mpd,
true,
"Try to generate DASH-IF IOP compliant MPD. This is best effort "
"and does not guarantee compliance.");
ABSL_FLAG(
bool,
allow_approximate_segment_timeline,
false,
"For live profile only. "
@ -59,23 +68,27 @@ DEFINE_bool(
"completely."
"Ignored if $Time$ is used in segment template, since $Time$ requires "
"accurate Segment Timeline.");
DEFINE_bool(allow_codec_switching,
false,
"If enabled, allow adaptive switching between different codecs, "
"if they have the same language, media type (audio, video etc) and "
"container type.");
DEFINE_bool(include_mspr_pro_for_playready,
true,
"If enabled, PlayReady Object <mspr:pro> will be inserted into "
"<ContentProtection ...> element alongside with <cenc:pssh> "
"when using PlayReady protection system.");
DEFINE_bool(dash_force_segment_list,
false,
"Uses SegmentList instead of SegmentBase. Use this if the "
"content is huge and the total number of (sub)segment references "
"is greater than what the sidx atom allows (65535). Currently "
"this flag is only supported in DASH ondemand profile.");
DEFINE_bool(
ABSL_FLAG(bool,
allow_codec_switching,
false,
"If enabled, allow adaptive switching between different codecs, "
"if they have the same language, media type (audio, video etc) and "
"container type.");
ABSL_FLAG(bool,
include_mspr_pro_for_playready,
true,
"If enabled, PlayReady Object <mspr:pro> will be inserted into "
"<ContentProtection ...> element alongside with <cenc:pssh> "
"when using PlayReady protection system.");
ABSL_FLAG(bool,
dash_force_segment_list,
false,
"Uses SegmentList instead of SegmentBase. Use this if the "
"content is huge and the total number of (sub)segment references "
"is greater than what the sidx atom allows (65535). Currently "
"this flag is only supported in DASH ondemand profile.");
ABSL_FLAG(
bool,
low_latency_dash_mode,
false,
"If enabled, LL-DASH streaming will be used, "


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -9,21 +9,22 @@
#ifndef APP_MPD_FLAGS_H_
#define APP_MPD_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_bool(generate_static_live_mpd);
DECLARE_bool(output_media_info);
DECLARE_string(mpd_output);
DECLARE_string(base_urls);
DECLARE_double(minimum_update_period);
DECLARE_double(min_buffer_time);
DECLARE_double(suggested_presentation_delay);
DECLARE_string(utc_timings);
DECLARE_bool(generate_dash_if_iop_compliant_mpd);
DECLARE_bool(allow_approximate_segment_timeline);
DECLARE_bool(allow_codec_switching);
DECLARE_bool(include_mspr_pro_for_playready);
DECLARE_bool(dash_force_segment_list);
DECLARE_bool(low_latency_dash_mode);
ABSL_DECLARE_FLAG(bool, generate_static_live_mpd);
ABSL_DECLARE_FLAG(bool, output_media_info);
ABSL_DECLARE_FLAG(std::string, mpd_output);
ABSL_DECLARE_FLAG(std::string, base_urls);
ABSL_DECLARE_FLAG(double, minimum_update_period);
ABSL_DECLARE_FLAG(double, min_buffer_time);
ABSL_DECLARE_FLAG(double, suggested_presentation_delay);
ABSL_DECLARE_FLAG(std::string, utc_timings);
ABSL_DECLARE_FLAG(bool, generate_dash_if_iop_compliant_mpd);
ABSL_DECLARE_FLAG(bool, allow_approximate_segment_timeline);
ABSL_DECLARE_FLAG(bool, allow_codec_switching);
ABSL_DECLARE_FLAG(bool, include_mspr_pro_for_playready);
ABSL_DECLARE_FLAG(bool, dash_force_segment_list);
ABSL_DECLARE_FLAG(bool, low_latency_dash_mode);
#endif // APP_MPD_FLAGS_H_


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,27 +6,31 @@
#include <iostream>
#include "packager/app/mpd_generator_flags.h"
#include "packager/app/vlog_flags.h"
#include "packager/base/at_exit.h"
#include "packager/base/command_line.h"
#include "packager/base/logging.h"
#include "packager/base/strings/string_split.h"
#include "packager/base/strings/stringprintf.h"
#include "packager/mpd/util/mpd_writer.h"
#include "packager/tools/license_notice.h"
#include "packager/version/version.h"
#if defined(OS_WIN)
#include <codecvt>
#include <functional>
#include <locale>
#endif // defined(OS_WIN)
DEFINE_bool(licenses, false, "Dump licenses.");
DEFINE_string(test_packager_version,
"",
"Packager version for testing. Should be used for testing only.");
#include <absl/flags/parse.h>
#include <absl/flags/usage.h>
#include <absl/flags/usage_config.h>
#include <absl/log/check.h>
#include <absl/log/initialize.h>
#include <absl/log/log.h>
#include <absl/strings/str_format.h>
#include <absl/strings/str_split.h>
#include <packager/app/mpd_generator_flags.h>
#include <packager/app/vlog_flags.h>
#include <packager/mpd/util/mpd_writer.h>
#include <packager/tools/license_notice.h>
#include <packager/version/version.h>
ABSL_FLAG(bool, licenses, false, "Dump licenses.");
ABSL_FLAG(std::string,
test_packager_version,
"",
"Packager version for testing. Should be used for testing only.");
namespace shaka {
namespace {
@ -51,12 +55,12 @@ enum ExitStatus {
};
ExitStatus CheckRequiredFlags() {
if (FLAGS_input.empty()) {
if (absl::GetFlag(FLAGS_input).empty()) {
LOG(ERROR) << "--input is required.";
return kEmptyInputError;
}
if (FLAGS_output.empty()) {
if (absl::GetFlag(FLAGS_output).empty()) {
LOG(ERROR) << "--output is required.";
return kEmptyOutputError;
}
@ -69,12 +73,12 @@ ExitStatus RunMpdGenerator() {
std::vector<std::string> base_urls;
typedef std::vector<std::string>::const_iterator Iterator;
std::vector<std::string> input_files = base::SplitString(
FLAGS_input, ",", base::KEEP_WHITESPACE, base::SPLIT_WANT_ALL);
std::vector<std::string> input_files =
absl::StrSplit(absl::GetFlag(FLAGS_input), ",", absl::AllowEmpty());
if (!FLAGS_base_urls.empty()) {
base_urls = base::SplitString(FLAGS_base_urls, ",", base::KEEP_WHITESPACE,
base::SPLIT_WANT_ALL);
if (!absl::GetFlag(FLAGS_base_urls).empty()) {
base_urls =
absl::StrSplit(absl::GetFlag(FLAGS_base_urls), ",", absl::AllowEmpty());
}
MpdWriter mpd_writer;
@ -87,8 +91,8 @@ ExitStatus RunMpdGenerator() {
}
}
if (!mpd_writer.WriteMpdToFile(FLAGS_output.c_str())) {
LOG(ERROR) << "Failed to write MPD to " << FLAGS_output;
if (!mpd_writer.WriteMpdToFile(absl::GetFlag(FLAGS_output).c_str())) {
LOG(ERROR) << "Failed to write MPD to " << absl::GetFlag(FLAGS_output);
return kFailedToWriteMpdToFileError;
}
@ -96,19 +100,19 @@ ExitStatus RunMpdGenerator() {
}
int MpdMain(int argc, char** argv) {
base::AtExitManager exit;
// Needed to enable VLOG/DVLOG through --vmodule or --v.
base::CommandLine::Init(argc, argv);
absl::FlagsUsageConfig flag_config;
flag_config.version_string = []() -> std::string {
return "mpd_generator version " + GetPackagerVersion() + "\n";
};
flag_config.contains_help_flags =
[](absl::string_view flag_file_name) -> bool { return true; };
absl::SetFlagsUsageConfig(flag_config);
// Set up logging.
logging::LoggingSettings log_settings;
log_settings.logging_dest = logging::LOG_TO_SYSTEM_DEBUG_LOG;
CHECK(logging::InitLogging(log_settings));
auto usage = absl::StrFormat(kUsage, argv[0]);
absl::SetProgramUsageMessage(usage);
absl::ParseCommandLine(argc, argv);
google::SetVersionString(GetPackagerVersion());
google::SetUsageMessage(base::StringPrintf(kUsage, argv[0]));
google::ParseCommandLineFlags(&argc, &argv, true);
if (FLAGS_licenses) {
if (absl::GetFlag(FLAGS_licenses)) {
for (const char* line : kLicenseNotice)
std::cout << line << std::endl;
return kSuccess;
@ -116,12 +120,16 @@ int MpdMain(int argc, char** argv) {
ExitStatus status = CheckRequiredFlags();
if (status != kSuccess) {
google::ShowUsageWithFlags("Usage");
std::cerr << "Usage " << absl::ProgramUsageMessage();
return status;
}
if (!FLAGS_test_packager_version.empty())
SetPackagerVersionForTesting(FLAGS_test_packager_version);
handle_vlog_flags();
absl::InitializeLog();
if (!absl::GetFlag(FLAGS_test_packager_version).empty())
SetPackagerVersionForTesting(absl::GetFlag(FLAGS_test_packager_version));
return RunMpdGenerator();
}
@ -142,12 +150,20 @@ int wmain(int argc, wchar_t* argv[], wchar_t* envp[]) {
delete[] utf8_args;
});
std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
for (int idx = 0; idx < argc; ++idx) {
std::string utf8_arg(converter.to_bytes(argv[idx]));
utf8_arg += '\0';
utf8_argv[idx] = new char[utf8_arg.size()];
memcpy(utf8_argv[idx], &utf8_arg[0], utf8_arg.size());
}
// Because we just converted wide character args into UTF8, and because
// std::filesystem::u8path is used to interpret all std::string paths as
// UTF8, we should set the locale to UTF8 as well, for the transition point
// to C library functions like fopen to work correctly with non-ASCII paths.
std::setlocale(LC_ALL, ".UTF8");
return shaka::MpdMain(argc, utf8_argv.get());
}
#else


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -7,12 +7,16 @@
#ifndef APP_MPD_GENERATOR_FLAGS_H_
#define APP_MPD_GENERATOR_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/flag.h>
DEFINE_string(input, "", "Comma separated list of MediaInfo input files.");
DEFINE_string(output, "", "MPD output file name.");
DEFINE_string(base_urls,
"",
"Comma separated BaseURLs for the MPD. The values will be added "
"as <BaseURL> element(s) immediately under the <MPD> element.");
ABSL_FLAG(std::string,
input,
"",
"Comma separated list of MediaInfo input files.");
ABSL_FLAG(std::string, output, "", "MPD output file name.");
ABSL_FLAG(std::string,
base_urls,
"",
"Comma separated BaseURLs for the MPD. The values will be added "
"as <BaseURL> element(s) immediately under the <MPD> element.");
#endif // APP_MPD_GENERATOR_FLAGS_H_


@ -1,21 +1,19 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/muxer_factory.h"
#include <packager/app/muxer_factory.h>
#include "packager/base/time/clock.h"
#include "packager/media/base/muxer.h"
#include "packager/media/base/muxer_options.h"
#include "packager/media/formats/mp2t/ts_muxer.h"
#include "packager/media/formats/mp4/mp4_muxer.h"
#include "packager/media/formats/packed_audio/packed_audio_writer.h"
#include "packager/media/formats/ttml/ttml_muxer.h"
#include "packager/media/formats/webm/webm_muxer.h"
#include "packager/media/formats/webvtt/webvtt_muxer.h"
#include "packager/packager.h"
#include <packager/media/base/muxer.h>
#include <packager/media/formats/mp2t/ts_muxer.h>
#include <packager/media/formats/mp4/mp4_muxer.h>
#include <packager/media/formats/packed_audio/packed_audio_writer.h>
#include <packager/media/formats/ttml/ttml_muxer.h>
#include <packager/media/formats/webm/webm_muxer.h>
#include <packager/media/formats/webvtt/webvtt_muxer.h>
#include <packager/packager.h>
namespace shaka {
namespace media {
@ -80,7 +78,7 @@ std::shared_ptr<Muxer> MuxerFactory::CreateMuxer(
return muxer;
}
void MuxerFactory::OverrideClock(base::Clock* clock) {
void MuxerFactory::OverrideClock(std::shared_ptr<Clock> clock) {
clock_ = clock;
}
} // namespace media


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -10,12 +10,9 @@
#include <memory>
#include <string>
#include "packager/media/base/container_names.h"
#include "packager/media/public/mp4_output_params.h"
namespace base {
class Clock;
} // namespace base
#include <packager/media/base/container_names.h>
#include <packager/mp4_output_params.h>
#include <packager/mpd/base/mpd_builder.h>
namespace shaka {
struct PackagingParams;
@ -40,7 +37,7 @@ class MuxerFactory {
/// For testing, if you need to replace the clock that muxers work with
/// this will replace the clock for all muxers created after this call.
void OverrideClock(base::Clock* clock);
void OverrideClock(std::shared_ptr<Clock> clock);
void SetTsStreamOffset(int32_t offset_ms) {
transport_stream_timestamp_offset_ms_ = offset_ms;
@ -53,7 +50,7 @@ class MuxerFactory {
const Mp4OutputParams mp4_params_;
const std::string temp_dir_;
int32_t transport_stream_timestamp_offset_ms_ = 0;
base::Clock* clock_ = nullptr;
std::shared_ptr<Clock> clock_ = nullptr;
};
} // namespace media
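One consequence of moving from a raw base::Clock* to std::shared_ptr<Clock>: a test no longer has to keep the clock object alive itself. A sketch, where FakeClock is a hypothetical Clock subclass:
auto fake_clock = std::make_shared<FakeClock>();  // hypothetical subclass
muxer_factory.OverrideClock(fake_clock);
// Every muxer created after this call shares ownership of |fake_clock|,
// even if the test's local shared_ptr goes out of scope first.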


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,50 +6,59 @@
//
// Defines Muxer flags.
#include "packager/app/muxer_flags.h"
#include <packager/app/muxer_flags.h>
DEFINE_double(clear_lead,
5.0f,
"Clear lead in seconds if encryption is enabled. Note that we do "
"not support partial segment encryption, so it is rounded up to "
"full segments. Set it to a value smaller than segment_duration "
"so only the first segment is in clear since the first segment "
"could be smaller than segment_duration if there is small "
"non-zero starting timestamp.");
DEFINE_double(segment_duration,
6.0f,
"Segment duration in seconds. If single_segment is specified, "
"this parameter sets the duration of a subsegment; otherwise, "
"this parameter sets the duration of a segment. Actual segment "
"durations may not be exactly as requested.");
DEFINE_bool(segment_sap_aligned,
true,
"Force segments to begin with stream access points.");
DEFINE_double(fragment_duration,
0,
"Fragment duration in seconds. Should not be larger than "
"the segment duration. Actual fragment durations may not be "
"exactly as requested.");
DEFINE_bool(fragment_sap_aligned,
true,
"Force fragments to begin with stream access points. This flag "
"implies segment_sap_aligned.");
DEFINE_bool(generate_sidx_in_media_segments,
true,
"Indicates whether to generate 'sidx' box in media segments. Note "
"that it is required for DASH on-demand profile (not using segment "
"template).");
DEFINE_string(temp_dir,
"",
"Specify a directory in which to store temporary (intermediate) "
" files. Used only if single_segment=true.");
DEFINE_bool(mp4_include_pssh_in_stream,
true,
"MP4 only: include pssh in the encrypted stream.");
DEFINE_int32(transport_stream_timestamp_offset_ms,
100,
"A positive value, in milliseconds, by which output timestamps "
"are offset to compensate for possible negative timestamps in the "
"input. For example, timestamps from ISO-BMFF after adjusted by "
"EditList could be negative. In transport streams, timestamps are "
"not allowed to be less than zero.");
ABSL_FLAG(double,
clear_lead,
5.0f,
"Clear lead in seconds if encryption is enabled. Note that we do "
"not support partial segment encryption, so it is rounded up to "
"full segments. Set it to a value smaller than segment_duration "
"so only the first segment is in clear since the first segment "
"could be smaller than segment_duration if there is small "
"non-zero starting timestamp.");
ABSL_FLAG(double,
segment_duration,
6.0f,
"Segment duration in seconds. If single_segment is specified, "
"this parameter sets the duration of a subsegment; otherwise, "
"this parameter sets the duration of a segment. Actual segment "
"durations may not be exactly as requested.");
ABSL_FLAG(bool,
segment_sap_aligned,
true,
"Force segments to begin with stream access points.");
ABSL_FLAG(double,
fragment_duration,
0,
"Fragment duration in seconds. Should not be larger than "
"the segment duration. Actual fragment durations may not be "
"exactly as requested.");
ABSL_FLAG(bool,
fragment_sap_aligned,
true,
"Force fragments to begin with stream access points. This flag "
"implies segment_sap_aligned.");
ABSL_FLAG(bool,
generate_sidx_in_media_segments,
true,
"Indicates whether to generate 'sidx' box in media segments. Note "
"that it is required for DASH on-demand profile (not using segment "
"template).");
ABSL_FLAG(std::string,
temp_dir,
"",
"Specify a directory in which to store temporary (intermediate) "
" files. Used only if single_segment=true.");
ABSL_FLAG(bool,
mp4_include_pssh_in_stream,
true,
"MP4 only: include pssh in the encrypted stream.");
ABSL_FLAG(int32_t,
transport_stream_timestamp_offset_ms,
100,
"A positive value, in milliseconds, by which output timestamps "
"are offset to compensate for possible negative timestamps in the "
"input. For example, timestamps from ISO-BMFF after adjusted by "
"EditList could be negative. In transport streams, timestamps are "
"not allowed to be less than zero.");


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -9,16 +9,17 @@
#ifndef APP_MUXER_FLAGS_H_
#define APP_MUXER_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_double(clear_lead);
DECLARE_double(segment_duration);
DECLARE_bool(segment_sap_aligned);
DECLARE_double(fragment_duration);
DECLARE_bool(fragment_sap_aligned);
DECLARE_bool(generate_sidx_in_media_segments);
DECLARE_string(temp_dir);
DECLARE_bool(mp4_include_pssh_in_stream);
DECLARE_int32(transport_stream_timestamp_offset_ms);
ABSL_DECLARE_FLAG(double, clear_lead);
ABSL_DECLARE_FLAG(double, segment_duration);
ABSL_DECLARE_FLAG(bool, segment_sap_aligned);
ABSL_DECLARE_FLAG(double, fragment_duration);
ABSL_DECLARE_FLAG(bool, fragment_sap_aligned);
ABSL_DECLARE_FLAG(bool, generate_sidx_in_media_segments);
ABSL_DECLARE_FLAG(std::string, temp_dir);
ABSL_DECLARE_FLAG(bool, mp4_include_pssh_in_stream);
ABSL_DECLARE_FLAG(int32_t, transport_stream_timestamp_offset_ms);
#endif // APP_MUXER_FLAGS_H_


@ -1,56 +1,63 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include <gflags/gflags.h>
#include <iostream>
#include "packager/app/ad_cue_generator_flags.h"
#include "packager/app/crypto_flags.h"
#include "packager/app/hls_flags.h"
#include "packager/app/manifest_flags.h"
#include "packager/app/mpd_flags.h"
#include "packager/app/muxer_flags.h"
#include "packager/app/packager_util.h"
#include "packager/app/playready_key_encryption_flags.h"
#include "packager/app/protection_system_flags.h"
#include "packager/app/raw_key_encryption_flags.h"
#include "packager/app/stream_descriptor.h"
#include "packager/app/vlog_flags.h"
#include "packager/app/widevine_encryption_flags.h"
#include "packager/base/command_line.h"
#include "packager/base/logging.h"
#include "packager/base/optional.h"
#include "packager/base/strings/string_number_conversions.h"
#include "packager/base/strings/string_split.h"
#include "packager/base/strings/string_util.h"
#include "packager/base/strings/stringprintf.h"
#include "packager/file/file.h"
#include "packager/packager.h"
#include "packager/tools/license_notice.h"
#include <optional>
#if defined(OS_WIN)
#include <codecvt>
#include <functional>
#include <locale>
#endif // defined(OS_WIN)
DEFINE_bool(dump_stream_info, false, "Dump demuxed stream info.");
DEFINE_bool(licenses, false, "Dump licenses.");
DEFINE_bool(quiet, false, "When enabled, LOG(INFO) output is suppressed.");
DEFINE_bool(use_fake_clock_for_muxer,
false,
"Set to true to use a fake clock for muxer. With this flag set, "
"creation time and modification time in outputs are set to 0. "
"Should only be used for testing.");
DEFINE_string(test_packager_version,
"",
"Packager version for testing. Should be used for testing only.");
DEFINE_bool(single_threaded,
false,
"If enabled, only use one thread when generating content.");
#include <absl/flags/flag.h>
#include <absl/flags/parse.h>
#include <absl/flags/usage.h>
#include <absl/flags/usage_config.h>
#include <absl/log/globals.h>
#include <absl/log/initialize.h>
#include <absl/log/log.h>
#include <absl/strings/numbers.h>
#include <absl/strings/str_format.h>
#include <absl/strings/str_split.h>
#include <packager/app/ad_cue_generator_flags.h>
#include <packager/app/crypto_flags.h>
#include <packager/app/hls_flags.h>
#include <packager/app/manifest_flags.h>
#include <packager/app/mpd_flags.h>
#include <packager/app/muxer_flags.h>
#include <packager/app/playready_key_encryption_flags.h>
#include <packager/app/protection_system_flags.h>
#include <packager/app/raw_key_encryption_flags.h>
#include <packager/app/retired_flags.h>
#include <packager/app/stream_descriptor.h>
#include <packager/app/vlog_flags.h>
#include <packager/app/widevine_encryption_flags.h>
#include <packager/file.h>
#include <packager/kv_pairs/kv_pairs.h>
#include <packager/tools/license_notice.h>
#include <packager/utils/string_trim_split.h>
ABSL_FLAG(bool, dump_stream_info, false, "Dump demuxed stream info.");
ABSL_FLAG(bool, licenses, false, "Dump licenses.");
ABSL_FLAG(bool, quiet, false, "When enabled, LOG(INFO) output is suppressed.");
ABSL_FLAG(bool,
use_fake_clock_for_muxer,
false,
"Set to true to use a fake clock for muxer. With this flag set, "
"creation time and modification time in outputs are set to 0. "
"Should only be used for testing.");
ABSL_FLAG(std::string,
test_packager_version,
"",
"Packager version for testing. Should be used for testing only.");
ABSL_FLAG(bool,
single_threaded,
false,
"If enabled, only use one thread when generating content.");
namespace shaka {
namespace {
@ -131,17 +138,18 @@ enum ExitStatus {
};
bool GetWidevineSigner(WidevineSigner* signer) {
signer->signer_name = FLAGS_signer;
if (!FLAGS_aes_signing_key_bytes.empty()) {
signer->signer_name = absl::GetFlag(FLAGS_signer);
if (!absl::GetFlag(FLAGS_aes_signing_key).bytes.empty()) {
signer->signing_key_type = WidevineSigner::SigningKeyType::kAes;
signer->aes.key = FLAGS_aes_signing_key_bytes;
signer->aes.iv = FLAGS_aes_signing_iv_bytes;
} else if (!FLAGS_rsa_signing_key_path.empty()) {
signer->aes.key = absl::GetFlag(FLAGS_aes_signing_key).bytes;
signer->aes.iv = absl::GetFlag(FLAGS_aes_signing_iv).bytes;
} else if (!absl::GetFlag(FLAGS_rsa_signing_key_path).empty()) {
signer->signing_key_type = WidevineSigner::SigningKeyType::kRsa;
if (!File::ReadFileToString(FLAGS_rsa_signing_key_path.c_str(),
&signer->rsa.key)) {
LOG(ERROR) << "Failed to read from '" << FLAGS_rsa_signing_key_path
<< "'.";
if (!File::ReadFileToString(
absl::GetFlag(FLAGS_rsa_signing_key_path).c_str(),
&signer->rsa.key)) {
LOG(ERROR) << "Failed to read from '"
<< absl::GetFlag(FLAGS_rsa_signing_key_path) << "'.";
return false;
}
}
@ -150,11 +158,11 @@ bool GetWidevineSigner(WidevineSigner* signer) {
bool GetHlsPlaylistType(const std::string& playlist_type,
HlsPlaylistType* playlist_type_enum) {
if (base::ToUpperASCII(playlist_type) == "VOD") {
if (absl::AsciiStrToUpper(playlist_type) == "VOD") {
*playlist_type_enum = HlsPlaylistType::kVod;
} else if (base::ToUpperASCII(playlist_type) == "LIVE") {
} else if (absl::AsciiStrToUpper(playlist_type) == "LIVE") {
*playlist_type_enum = HlsPlaylistType::kLive;
} else if (base::ToUpperASCII(playlist_type) == "EVENT") {
} else if (absl::AsciiStrToUpper(playlist_type) == "EVENT") {
*playlist_type_enum = HlsPlaylistType::kEvent;
} else {
LOG(ERROR) << "Unrecognized playlist type " << playlist_type;
@ -164,31 +172,33 @@ bool GetHlsPlaylistType(const std::string& playlist_type,
}
bool GetProtectionScheme(uint32_t* protection_scheme) {
if (FLAGS_protection_scheme == "cenc") {
if (absl::GetFlag(FLAGS_protection_scheme) == "cenc") {
*protection_scheme = EncryptionParams::kProtectionSchemeCenc;
return true;
}
if (FLAGS_protection_scheme == "cbc1") {
if (absl::GetFlag(FLAGS_protection_scheme) == "cbc1") {
*protection_scheme = EncryptionParams::kProtectionSchemeCbc1;
return true;
}
if (FLAGS_protection_scheme == "cbcs") {
if (absl::GetFlag(FLAGS_protection_scheme) == "cbcs") {
*protection_scheme = EncryptionParams::kProtectionSchemeCbcs;
return true;
}
if (FLAGS_protection_scheme == "cens") {
if (absl::GetFlag(FLAGS_protection_scheme) == "cens") {
*protection_scheme = EncryptionParams::kProtectionSchemeCens;
return true;
}
LOG(ERROR) << "Unrecognized protection_scheme " << FLAGS_protection_scheme;
LOG(ERROR) << "Unrecognized protection_scheme "
<< absl::GetFlag(FLAGS_protection_scheme);
return false;
}
bool ParseKeys(const std::string& keys, RawKeyParams* raw_key) {
for (const std::string& key_data : base::SplitString(
keys, ",", base::TRIM_WHITESPACE, base::SPLIT_WANT_NONEMPTY)) {
base::StringPairs string_pairs;
base::SplitStringIntoKeyValuePairs(key_data, '=', ':', &string_pairs);
std::vector<std::string> keys_data = SplitAndTrimSkipEmpty(keys, ',');
for (const std::string& key_data : keys_data) {
std::vector<KVPair> string_pairs =
SplitStringIntoKeyValuePairs(key_data, '=', ':');
std::map<std::string, std::string> value_map;
for (const auto& string_pair : string_pairs)
@ -200,13 +210,14 @@ bool ParseKeys(const std::string& keys, RawKeyParams* raw_key) {
}
auto& key_info = raw_key->key_map[drm_label];
if (value_map[kKeyIdLabel].empty() ||
!base::HexStringToBytes(value_map[kKeyIdLabel], &key_info.key_id)) {
!shaka::ValidHexStringToBytes(value_map[kKeyIdLabel],
&key_info.key_id)) {
LOG(ERROR) << "Empty key id or invalid hex string for key id: "
<< value_map[kKeyIdLabel];
return false;
}
if (value_map[kKeyLabel].empty() ||
!base::HexStringToBytes(value_map[kKeyLabel], &key_info.key)) {
!shaka::ValidHexStringToBytes(value_map[kKeyLabel], &key_info.key)) {
LOG(ERROR) << "Empty key or invalid hex string for key: "
<< value_map[kKeyLabel];
return false;
@ -216,7 +227,7 @@ bool ParseKeys(const std::string& keys, RawKeyParams* raw_key) {
LOG(ERROR) << "IV already specified with --iv";
return false;
}
if (!base::HexStringToBytes(value_map[kKeyIvLabel], &key_info.iv)) {
if (!shaka::ValidHexStringToBytes(value_map[kKeyIvLabel], &key_info.iv)) {
LOG(ERROR) << "Empty IV or invalid hex string for IV: "
<< value_map[kKeyIvLabel];
return false;
@ -227,18 +238,18 @@ bool ParseKeys(const std::string& keys, RawKeyParams* raw_key) {
}
bool GetRawKeyParams(RawKeyParams* raw_key) {
raw_key->iv = FLAGS_iv_bytes;
raw_key->pssh = FLAGS_pssh_bytes;
if (!FLAGS_keys.empty()) {
if (!ParseKeys(FLAGS_keys, raw_key)) {
LOG(ERROR) << "Failed to parse --keys " << FLAGS_keys;
raw_key->iv = absl::GetFlag(FLAGS_iv).bytes;
raw_key->pssh = absl::GetFlag(FLAGS_pssh).bytes;
if (!absl::GetFlag(FLAGS_keys).empty()) {
if (!ParseKeys(absl::GetFlag(FLAGS_keys), raw_key)) {
LOG(ERROR) << "Failed to parse --keys " << absl::GetFlag(FLAGS_keys);
return false;
}
} else {
// An empty StreamLabel specifies the default key info.
RawKeyParams::KeyInfo& key_info = raw_key->key_map[""];
key_info.key_id = FLAGS_key_id_bytes;
key_info.key = FLAGS_key_bytes;
key_info.key_id = absl::GetFlag(FLAGS_key_id).bytes;
key_info.key = absl::GetFlag(FLAGS_key).bytes;
}
return true;
}
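
`ParseKeys()` above now leans on project-local helpers (`SplitAndTrimSkipEmpty`, `SplitStringIntoKeyValuePairs`) instead of Chromium's `base::SplitString` family. A hedged reimplementation of the assumed semantics using `absl::StrSplit`; the real helpers live under `packager/utils/` and `packager/kv_pairs/`, and `KVPair` is assumed to be a pair of strings:

```cpp
// Illustrative stand-ins for the splitting helpers used above; the real
// implementations live in packager/.
#include <string>
#include <utility>
#include <vector>

#include <absl/strings/ascii.h>
#include <absl/strings/str_split.h>
#include <absl/strings/string_view.h>

using KVPair = std::pair<std::string, std::string>;  // assumed alias

std::vector<std::string> SplitAndTrimSkipEmpty(const std::string& str,
                                               char delimiter) {
  // Split on `delimiter`, trim ASCII whitespace, drop empty tokens.
  std::vector<std::string> out;
  for (absl::string_view token : absl::StrSplit(str, delimiter)) {
    token = absl::StripAsciiWhitespace(token);
    if (!token.empty())
      out.emplace_back(token);
  }
  return out;
}

std::vector<KVPair> SplitStringIntoKeyValuePairs(const std::string& str,
                                                 char kv_delimiter,
                                                 char pair_delimiter) {
  // "label=foo:key_id=ab:key=cd" with ('=', ':') yields three pairs.
  std::vector<KVPair> pairs;
  for (const std::string& token : SplitAndTrimSkipEmpty(str, pair_delimiter)) {
    // Split each token on the first kv_delimiter only; a missing value
    // yields an empty second field.
    KVPair kv = absl::StrSplit(token, absl::MaxSplits(kv_delimiter, 1));
    pairs.push_back(kv);
  }
  return pairs;
}
```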
@ -246,26 +257,25 @@ bool GetRawKeyParams(RawKeyParams* raw_key) {
bool ParseAdCues(const std::string& ad_cues, std::vector<Cuepoint>* cuepoints) {
// Track if optional field is supplied consistently across all cue points.
size_t duration_count = 0;
std::vector<std::string> ad_cues_vec = SplitAndTrimSkipEmpty(ad_cues, ';');
for (const std::string& ad_cue : base::SplitString(
ad_cues, ";", base::TRIM_WHITESPACE, base::SPLIT_WANT_NONEMPTY)) {
for (const std::string& ad_cue : ad_cues_vec) {
Cuepoint cuepoint;
auto split_ad_cue = base::SplitString(ad_cue, ",", base::TRIM_WHITESPACE,
base::SPLIT_WANT_NONEMPTY);
std::vector<std::string> split_ad_cue = SplitAndTrimSkipEmpty(ad_cue, ',');
if (split_ad_cue.size() > 2) {
LOG(ERROR) << "Failed to parse --ad_cues " << ad_cues
<< " Each ad cue must contain no more than 2 components.";
}
if (!base::StringToDouble(split_ad_cue.front(),
&cuepoint.start_time_in_seconds)) {
if (!absl::SimpleAtod(split_ad_cue.front(),
&cuepoint.start_time_in_seconds)) {
LOG(ERROR) << "Failed to parse --ad_cues " << ad_cues
<< " Start time component must be of type double.";
return false;
}
if (split_ad_cue.size() > 1) {
duration_count++;
if (!base::StringToDouble(split_ad_cue[1],
&cuepoint.duration_in_seconds)) {
if (!absl::SimpleAtod(split_ad_cue[1], &cuepoint.duration_in_seconds)) {
LOG(ERROR) << "Failed to parse --ad_cues " << ad_cues
<< " Duration component must be of type double.";
return false;
@ -296,9 +306,10 @@ bool ParseProtectionSystems(const std::string& protection_systems_str,
{"widevine", ProtectionSystem::kWidevine},
};
for (const std::string& protection_system :
base::SplitString(base::ToLowerASCII(protection_systems_str), ",",
base::TRIM_WHITESPACE, base::SPLIT_WANT_NONEMPTY)) {
std::vector<std::string> protection_systems_vec =
SplitAndTrimSkipEmpty(absl::AsciiStrToLower(protection_systems_str), ',');
for (const std::string& protection_system : protection_systems_vec) {
auto iter = mapping.find(protection_system);
if (iter == mapping.end()) {
LOG(ERROR) << "Seeing unrecognized protection system: "
@ -310,36 +321,42 @@ bool ParseProtectionSystems(const std::string& protection_systems_str,
return true;
}
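
`ParseAdCues()` above, and the stream-descriptor parsing later in this diff, replace `base::StringToDouble`/`base::StringToUint` with `absl::SimpleAtod`/`absl::SimpleAtoi`, which likewise return false unless the entire string is a valid number. A small sketch:

```cpp
// Sketch of the numeric-parsing replacements: absl::SimpleAtod/SimpleAtoi
// fail (return false) unless the whole string parses.
#include <iostream>

#include <absl/strings/numbers.h>

int main() {
  // An ad cue like "1800,30" splits into a start time and optional duration.
  double start_time = 0;
  double duration = 0;
  if (!absl::SimpleAtod("1800", &start_time) ||
      !absl::SimpleAtod("30", &duration)) {
    std::cerr << "Start time and duration must be of type double.\n";
    return 1;
  }
  unsigned bandwidth = 0;
  if (!absl::SimpleAtoi("128000", &bandwidth)) {
    std::cerr << "Non-numeric bandwidth specified.\n";
    return 1;
  }
  std::cout << start_time << " " << duration << " " << bandwidth << "\n";
  return 0;
}
```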
base::Optional<PackagingParams> GetPackagingParams() {
std::optional<PackagingParams> GetPackagingParams() {
PackagingParams packaging_params;
packaging_params.temp_dir = FLAGS_temp_dir;
packaging_params.single_threaded = FLAGS_single_threaded;
packaging_params.temp_dir = absl::GetFlag(FLAGS_temp_dir);
packaging_params.single_threaded = absl::GetFlag(FLAGS_single_threaded);
AdCueGeneratorParams& ad_cue_generator_params =
packaging_params.ad_cue_generator_params;
if (!ParseAdCues(FLAGS_ad_cues, &ad_cue_generator_params.cue_points)) {
return base::nullopt;
if (!ParseAdCues(absl::GetFlag(FLAGS_ad_cues),
&ad_cue_generator_params.cue_points)) {
return std::nullopt;
}
ChunkingParams& chunking_params = packaging_params.chunking_params;
chunking_params.segment_duration_in_seconds = FLAGS_segment_duration;
chunking_params.subsegment_duration_in_seconds = FLAGS_fragment_duration;
chunking_params.low_latency_dash_mode = FLAGS_low_latency_dash_mode;
chunking_params.segment_sap_aligned = FLAGS_segment_sap_aligned;
chunking_params.subsegment_sap_aligned = FLAGS_fragment_sap_aligned;
chunking_params.segment_duration_in_seconds =
absl::GetFlag(FLAGS_segment_duration);
chunking_params.subsegment_duration_in_seconds =
absl::GetFlag(FLAGS_fragment_duration);
chunking_params.low_latency_dash_mode =
absl::GetFlag(FLAGS_low_latency_dash_mode);
chunking_params.segment_sap_aligned =
absl::GetFlag(FLAGS_segment_sap_aligned);
chunking_params.subsegment_sap_aligned =
absl::GetFlag(FLAGS_fragment_sap_aligned);
int num_key_providers = 0;
EncryptionParams& encryption_params = packaging_params.encryption_params;
if (FLAGS_enable_widevine_encryption) {
if (absl::GetFlag(FLAGS_enable_widevine_encryption)) {
encryption_params.key_provider = KeyProvider::kWidevine;
++num_key_providers;
}
if (FLAGS_enable_playready_encryption) {
if (absl::GetFlag(FLAGS_enable_playready_encryption)) {
encryption_params.key_provider = KeyProvider::kPlayReady;
++num_key_providers;
}
if (FLAGS_enable_raw_key_encryption) {
if (absl::GetFlag(FLAGS_enable_raw_key_encryption)) {
encryption_params.key_provider = KeyProvider::kRawKey;
++num_key_providers;
}
@ -347,52 +364,55 @@ base::Optional<PackagingParams> GetPackagingParams() {
LOG(ERROR) << "Only one of --enable_widevine_encryption, "
"--enable_playready_encryption, "
"--enable_raw_key_encryption can be enabled.";
return base::nullopt;
return std::nullopt;
}
if (!ParseProtectionSystems(FLAGS_protection_systems,
if (!ParseProtectionSystems(absl::GetFlag(FLAGS_protection_systems),
&encryption_params.protection_systems)) {
return base::nullopt;
return std::nullopt;
}
if (encryption_params.key_provider != KeyProvider::kNone) {
encryption_params.clear_lead_in_seconds = FLAGS_clear_lead;
encryption_params.clear_lead_in_seconds = absl::GetFlag(FLAGS_clear_lead);
if (!GetProtectionScheme(&encryption_params.protection_scheme))
return base::nullopt;
encryption_params.crypt_byte_block = FLAGS_crypt_byte_block;
encryption_params.skip_byte_block = FLAGS_skip_byte_block;
return std::nullopt;
encryption_params.crypt_byte_block = absl::GetFlag(FLAGS_crypt_byte_block);
encryption_params.skip_byte_block = absl::GetFlag(FLAGS_skip_byte_block);
encryption_params.crypto_period_duration_in_seconds =
FLAGS_crypto_period_duration;
encryption_params.vp9_subsample_encryption = FLAGS_vp9_subsample_encryption;
absl::GetFlag(FLAGS_crypto_period_duration);
encryption_params.vp9_subsample_encryption =
absl::GetFlag(FLAGS_vp9_subsample_encryption);
encryption_params.stream_label_func = std::bind(
&Packager::DefaultStreamLabelFunction, FLAGS_max_sd_pixels,
FLAGS_max_hd_pixels, FLAGS_max_uhd1_pixels, std::placeholders::_1);
&Packager::DefaultStreamLabelFunction,
absl::GetFlag(FLAGS_max_sd_pixels), absl::GetFlag(FLAGS_max_hd_pixels),
absl::GetFlag(FLAGS_max_uhd1_pixels), std::placeholders::_1);
encryption_params.playready_extra_header_data =
FLAGS_playready_extra_header_data;
absl::GetFlag(FLAGS_playready_extra_header_data);
}
switch (encryption_params.key_provider) {
case KeyProvider::kWidevine: {
WidevineEncryptionParams& widevine = encryption_params.widevine;
widevine.key_server_url = FLAGS_key_server_url;
widevine.key_server_url = absl::GetFlag(FLAGS_key_server_url);
widevine.content_id = FLAGS_content_id_bytes;
widevine.policy = FLAGS_policy;
widevine.group_id = FLAGS_group_id_bytes;
widevine.enable_entitlement_license = FLAGS_enable_entitlement_license;
widevine.content_id = absl::GetFlag(FLAGS_content_id).bytes;
widevine.policy = absl::GetFlag(FLAGS_policy);
widevine.group_id = absl::GetFlag(FLAGS_group_id).bytes;
widevine.enable_entitlement_license =
absl::GetFlag(FLAGS_enable_entitlement_license);
if (!GetWidevineSigner(&widevine.signer))
return base::nullopt;
return std::nullopt;
break;
}
case KeyProvider::kPlayReady: {
PlayReadyEncryptionParams& playready = encryption_params.playready;
playready.key_server_url = FLAGS_playready_server_url;
playready.program_identifier = FLAGS_program_identifier;
playready.key_server_url = absl::GetFlag(FLAGS_playready_server_url);
playready.program_identifier = absl::GetFlag(FLAGS_program_identifier);
break;
}
case KeyProvider::kRawKey: {
if (!GetRawKeyParams(&encryption_params.raw_key))
return base::nullopt;
return std::nullopt;
break;
}
case KeyProvider::kNone:
@ -401,30 +421,30 @@ base::Optional<PackagingParams> GetPackagingParams() {
num_key_providers = 0;
DecryptionParams& decryption_params = packaging_params.decryption_params;
if (FLAGS_enable_widevine_decryption) {
if (absl::GetFlag(FLAGS_enable_widevine_decryption)) {
decryption_params.key_provider = KeyProvider::kWidevine;
++num_key_providers;
}
if (FLAGS_enable_raw_key_decryption) {
if (absl::GetFlag(FLAGS_enable_raw_key_decryption)) {
decryption_params.key_provider = KeyProvider::kRawKey;
++num_key_providers;
}
if (num_key_providers > 1) {
LOG(ERROR) << "Only one of --enable_widevine_decryption, "
"--enable_raw_key_decryption can be enabled.";
return base::nullopt;
return std::nullopt;
}
switch (decryption_params.key_provider) {
case KeyProvider::kWidevine: {
WidevineDecryptionParams& widevine = decryption_params.widevine;
widevine.key_server_url = FLAGS_key_server_url;
widevine.key_server_url = absl::GetFlag(FLAGS_key_server_url);
if (!GetWidevineSigner(&widevine.signer))
return base::nullopt;
return std::nullopt;
break;
}
case KeyProvider::kRawKey: {
if (!GetRawKeyParams(&decryption_params.raw_key))
return base::nullopt;
return std::nullopt;
break;
}
case KeyProvider::kPlayReady:
@ -434,110 +454,132 @@ base::Optional<PackagingParams> GetPackagingParams() {
Mp4OutputParams& mp4_params = packaging_params.mp4_output_params;
mp4_params.generate_sidx_in_media_segments =
FLAGS_generate_sidx_in_media_segments;
mp4_params.include_pssh_in_stream = FLAGS_mp4_include_pssh_in_stream;
mp4_params.low_latency_dash_mode = FLAGS_low_latency_dash_mode;
absl::GetFlag(FLAGS_generate_sidx_in_media_segments);
mp4_params.include_pssh_in_stream =
absl::GetFlag(FLAGS_mp4_include_pssh_in_stream);
mp4_params.low_latency_dash_mode = absl::GetFlag(FLAGS_low_latency_dash_mode);
packaging_params.transport_stream_timestamp_offset_ms =
FLAGS_transport_stream_timestamp_offset_ms;
absl::GetFlag(FLAGS_transport_stream_timestamp_offset_ms);
packaging_params.output_media_info = FLAGS_output_media_info;
packaging_params.output_media_info = absl::GetFlag(FLAGS_output_media_info);
MpdParams& mpd_params = packaging_params.mpd_params;
mpd_params.mpd_output = FLAGS_mpd_output;
mpd_params.base_urls = base::SplitString(
FLAGS_base_urls, ",", base::TRIM_WHITESPACE, base::SPLIT_WANT_NONEMPTY);
mpd_params.min_buffer_time = FLAGS_min_buffer_time;
mpd_params.minimum_update_period = FLAGS_minimum_update_period;
mpd_params.suggested_presentation_delay = FLAGS_suggested_presentation_delay;
mpd_params.time_shift_buffer_depth = FLAGS_time_shift_buffer_depth;
mpd_params.preserved_segments_outside_live_window =
FLAGS_preserved_segments_outside_live_window;
mpd_params.use_segment_list = FLAGS_dash_force_segment_list;
mpd_params.mpd_output = absl::GetFlag(FLAGS_mpd_output);
if (!FLAGS_utc_timings.empty()) {
base::StringPairs pairs;
if (!base::SplitStringIntoKeyValuePairs(FLAGS_utc_timings, '=', ',',
&pairs)) {
std::vector<std::string> base_urls =
SplitAndTrimSkipEmpty(absl::GetFlag(FLAGS_base_urls), ',');
mpd_params.base_urls = base_urls;
mpd_params.min_buffer_time = absl::GetFlag(FLAGS_min_buffer_time);
mpd_params.minimum_update_period = absl::GetFlag(FLAGS_minimum_update_period);
mpd_params.suggested_presentation_delay =
absl::GetFlag(FLAGS_suggested_presentation_delay);
mpd_params.time_shift_buffer_depth =
absl::GetFlag(FLAGS_time_shift_buffer_depth);
mpd_params.preserved_segments_outside_live_window =
absl::GetFlag(FLAGS_preserved_segments_outside_live_window);
mpd_params.use_segment_list = absl::GetFlag(FLAGS_dash_force_segment_list);
if (!absl::GetFlag(FLAGS_utc_timings).empty()) {
std::vector<KVPair> pairs = SplitStringIntoKeyValuePairs(
absl::GetFlag(FLAGS_utc_timings), '=', ',');
if (pairs.empty()) {
LOG(ERROR) << "Invalid --utc_timings scheme_id_uri/value pairs.";
return base::nullopt;
return std::nullopt;
}
for (const auto& string_pair : pairs) {
mpd_params.utc_timings.push_back({string_pair.first, string_pair.second});
}
}
mpd_params.default_language = FLAGS_default_language;
mpd_params.default_text_language = FLAGS_default_text_language;
mpd_params.generate_static_live_mpd = FLAGS_generate_static_live_mpd;
mpd_params.default_language = absl::GetFlag(FLAGS_default_language);
mpd_params.default_text_language = absl::GetFlag(FLAGS_default_text_language);
mpd_params.generate_static_live_mpd =
absl::GetFlag(FLAGS_generate_static_live_mpd);
mpd_params.generate_dash_if_iop_compliant_mpd =
FLAGS_generate_dash_if_iop_compliant_mpd;
absl::GetFlag(FLAGS_generate_dash_if_iop_compliant_mpd);
mpd_params.allow_approximate_segment_timeline =
FLAGS_allow_approximate_segment_timeline;
mpd_params.allow_codec_switching = FLAGS_allow_codec_switching;
mpd_params.include_mspr_pro = FLAGS_include_mspr_pro_for_playready;
mpd_params.low_latency_dash_mode = FLAGS_low_latency_dash_mode;
absl::GetFlag(FLAGS_allow_approximate_segment_timeline);
mpd_params.allow_codec_switching = absl::GetFlag(FLAGS_allow_codec_switching);
mpd_params.include_mspr_pro =
absl::GetFlag(FLAGS_include_mspr_pro_for_playready);
mpd_params.low_latency_dash_mode = absl::GetFlag(FLAGS_low_latency_dash_mode);
HlsParams& hls_params = packaging_params.hls_params;
if (!GetHlsPlaylistType(FLAGS_hls_playlist_type, &hls_params.playlist_type)) {
return base::nullopt;
if (!GetHlsPlaylistType(absl::GetFlag(FLAGS_hls_playlist_type),
&hls_params.playlist_type)) {
return std::nullopt;
}
hls_params.master_playlist_output = FLAGS_hls_master_playlist_output;
hls_params.base_url = FLAGS_hls_base_url;
hls_params.key_uri = FLAGS_hls_key_uri;
hls_params.time_shift_buffer_depth = FLAGS_time_shift_buffer_depth;
hls_params.master_playlist_output =
absl::GetFlag(FLAGS_hls_master_playlist_output);
hls_params.base_url = absl::GetFlag(FLAGS_hls_base_url);
hls_params.key_uri = absl::GetFlag(FLAGS_hls_key_uri);
hls_params.time_shift_buffer_depth =
absl::GetFlag(FLAGS_time_shift_buffer_depth);
hls_params.preserved_segments_outside_live_window =
FLAGS_preserved_segments_outside_live_window;
hls_params.default_language = FLAGS_default_language;
hls_params.default_text_language = FLAGS_default_text_language;
hls_params.media_sequence_number = FLAGS_hls_media_sequence_number;
absl::GetFlag(FLAGS_preserved_segments_outside_live_window);
hls_params.default_language = absl::GetFlag(FLAGS_default_language);
hls_params.default_text_language = absl::GetFlag(FLAGS_default_text_language);
hls_params.media_sequence_number =
absl::GetFlag(FLAGS_hls_media_sequence_number);
TestParams& test_params = packaging_params.test_params;
test_params.dump_stream_info = FLAGS_dump_stream_info;
test_params.inject_fake_clock = FLAGS_use_fake_clock_for_muxer;
if (!FLAGS_test_packager_version.empty())
test_params.injected_library_version = FLAGS_test_packager_version;
test_params.dump_stream_info = absl::GetFlag(FLAGS_dump_stream_info);
test_params.inject_fake_clock = absl::GetFlag(FLAGS_use_fake_clock_for_muxer);
if (!absl::GetFlag(FLAGS_test_packager_version).empty())
test_params.injected_library_version =
absl::GetFlag(FLAGS_test_packager_version);
return packaging_params;
}
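
`GetPackagingParams()` also illustrates the `base::Optional` → `std::optional` migration: every `base::nullopt` early return becomes `std::nullopt`. A minimal sketch of the pattern:

```cpp
// Minimal sketch of the base::Optional -> std::optional replacement.
#include <iostream>
#include <optional>

std::optional<int> ParsePositive(int value) {
  if (value <= 0)
    return std::nullopt;  // previously base::nullopt
  return value;           // implicitly wraps the value
}

int main() {
  if (auto v = ParsePositive(42))
    std::cout << *v << "\n";
  return 0;
}
```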
int PackagerMain(int argc, char** argv) {
// Needed to enable VLOG/DVLOG through --vmodule or --v.
base::CommandLine::Init(argc, argv);
absl::FlagsUsageConfig flag_config;
flag_config.version_string = []() -> std::string {
return "packager version " + shaka::Packager::GetLibraryVersion() + "\n";
};
flag_config.contains_help_flags =
[](absl::string_view flag_file_name) -> bool { return true; };
absl::SetFlagsUsageConfig(flag_config);
// Set up logging.
logging::LoggingSettings log_settings;
log_settings.logging_dest = logging::LOG_TO_SYSTEM_DEBUG_LOG;
CHECK(logging::InitLogging(log_settings));
auto usage = absl::StrFormat(kUsage, argv[0]);
absl::SetProgramUsageMessage(usage);
google::SetVersionString(shaka::Packager::GetLibraryVersion());
google::SetUsageMessage(base::StringPrintf(kUsage, argv[0]));
google::ParseCommandLineFlags(&argc, &argv, true);
if (FLAGS_licenses) {
auto remaining_args = absl::ParseCommandLine(argc, argv);
if (absl::GetFlag(FLAGS_licenses)) {
for (const char* line : kLicenseNotice)
std::cout << line << std::endl;
return kSuccess;
}
if (argc < 2) {
google::ShowUsageWithFlags("Usage");
if (remaining_args.size() < 2) {
std::cerr << "Usage: " << absl::ProgramUsageMessage();
return kSuccess;
}
if (FLAGS_quiet)
logging::SetMinLogLevel(logging::LOG_WARNING);
if (absl::GetFlag(FLAGS_quiet)) {
absl::SetMinLogLevel(absl::LogSeverityAtLeast::kWarning);
}
handle_vlog_flags();
absl::InitializeLog();
if (!ValidateWidevineCryptoFlags() || !ValidateRawKeyCryptoFlags() ||
!ValidatePRCryptoFlags()) {
!ValidatePRCryptoFlags() || !ValidateCryptoFlags() ||
!ValidateRetiredFlags()) {
return kArgumentValidationFailed;
}
base::Optional<PackagingParams> packaging_params = GetPackagingParams();
std::optional<PackagingParams> packaging_params = GetPackagingParams();
if (!packaging_params)
return kArgumentValidationFailed;
std::vector<StreamDescriptor> stream_descriptors;
for (int i = 1; i < argc; ++i) {
base::Optional<StreamDescriptor> stream_descriptor =
ParseStreamDescriptor(argv[i]);
for (size_t i = 1; i < remaining_args.size(); ++i) {
std::optional<StreamDescriptor> stream_descriptor =
ParseStreamDescriptor(remaining_args[i]);
if (!stream_descriptor)
return kArgumentValidationFailed;
stream_descriptors.push_back(stream_descriptor.value());
@ -554,7 +596,7 @@ int PackagerMain(int argc, char** argv) {
LOG(ERROR) << "Packaging Error: " << status.ToString();
return kPackagingFailed;
}
if (!FLAGS_quiet)
if (!absl::GetFlag(FLAGS_quiet))
printf("Packaging completed successfully.\n");
return kSuccess;
}
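
`PackagerMain()` replaces the gflags bootstrap (`SetVersionString`, `SetUsageMessage`, `ParseCommandLineFlags`) with Abseil's usage config and `absl::ParseCommandLine()`, which returns the positional arguments that gflags used to compact into `argv`. A sketch of that bootstrap in isolation:

```cpp
// Sketch of the Abseil CLI bootstrap pattern adopted above.
#include <iostream>
#include <string>
#include <vector>

#include <absl/flags/parse.h>
#include <absl/flags/usage.h>
#include <absl/flags/usage_config.h>

int main(int argc, char** argv) {
  absl::FlagsUsageConfig config;
  config.version_string = []() -> std::string { return "demo 1.0\n"; };
  absl::SetFlagsUsageConfig(config);
  absl::SetProgramUsageMessage("demo [flags] <inputs>...");

  // Returns the non-flag arguments; element 0 is the program name.
  std::vector<char*> positional = absl::ParseCommandLine(argc, argv);
  for (size_t i = 1; i < positional.size(); ++i)
    std::cout << "input: " << positional[i] << "\n";
  return 0;
}
```

This is why the code above checks `remaining_args.size() < 2` and iterates from index 1: like `argv`, the returned vector keeps the program name in slot 0.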
@ -575,12 +617,20 @@ int wmain(int argc, wchar_t* argv[], wchar_t* envp[]) {
delete[] utf8_args;
});
std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
for (int idx = 0; idx < argc; ++idx) {
std::string utf8_arg(converter.to_bytes(argv[idx]));
utf8_arg += '\0';
utf8_argv[idx] = new char[utf8_arg.size()];
memcpy(utf8_argv[idx], &utf8_arg[0], utf8_arg.size());
}
// Because we just converted wide character args into UTF8, and because
// std::filesystem::u8path is used to interpret all std::string paths as
// UTF8, we should set the locale to UTF8 as well, for the transition point
// to C library functions like fopen to work correctly with non-ASCII paths.
std::setlocale(LC_ALL, ".UTF8");
return shaka::PackagerMain(argc, utf8_argv.get());
}
#else
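
The Windows `wmain()` shim above converts wide arguments to UTF-8 and then switches the C locale to UTF-8, so narrow strings passed down to C APIs keep working with non-ASCII paths. A hedged, Windows-oriented sketch of why the locale call matters (assumes a UCRT that accepts the ".UTF8" locale name):

```cpp
// Windows-oriented sketch of the UTF-8 handling added above.
#include <clocale>
#include <cstdio>
#include <filesystem>

int main() {
  // Make narrow-string C APIs (fopen, etc.) interpret strings as UTF-8.
  std::setlocale(LC_ALL, ".UTF8");

  // u8path() interprets a std::string as UTF-8 for std::filesystem.
  const std::filesystem::path path = std::filesystem::u8path("héllo.txt");

  // With the UTF-8 locale active, the same bytes round-trip through fopen.
  if (std::FILE* f = std::fopen("héllo.txt", "w"))
    std::fclose(f);
  return path.empty() ? 1 : 0;
}
```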


@ -1,23 +1,21 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/packager_util.h"
#include <packager/app/packager_util.h>
#include "packager/base/logging.h"
#include "packager/base/strings/string_number_conversions.h"
#include "packager/base/strings/string_split.h"
#include "packager/file/file.h"
#include "packager/media/base/media_handler.h"
#include "packager/media/base/muxer_options.h"
#include "packager/media/base/playready_key_source.h"
#include "packager/media/base/raw_key_source.h"
#include "packager/media/base/request_signer.h"
#include "packager/media/base/widevine_key_source.h"
#include "packager/mpd/base/mpd_options.h"
#include "packager/status.h"
#include <absl/log/log.h>
#include <packager/file.h>
#include <packager/media/base/media_handler.h>
#include <packager/media/base/muxer_options.h>
#include <packager/media/base/playready_key_source.h>
#include <packager/media/base/raw_key_source.h>
#include <packager/media/base/request_signer.h>
#include <packager/media/base/widevine_key_source.h>
#include <packager/mpd/base/mpd_options.h>
namespace shaka {
namespace media {


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -12,7 +12,7 @@
#include <memory>
#include <vector>
#include "packager/media/base/fourccs.h"
#include <packager/media/base/fourccs.h>
namespace shaka {


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,16 +6,22 @@
//
// Defines command line flags for PlayReady encryption.
#include "packager/app/playready_key_encryption_flags.h"
#include <packager/app/playready_key_encryption_flags.h>
#include "packager/app/validate_flag.h"
#include <packager/app/validate_flag.h>
DEFINE_bool(enable_playready_encryption,
false,
"Enable encryption with PlayReady key.");
DEFINE_string(playready_server_url, "", "PlayReady packaging server url.");
DEFINE_string(program_identifier, "",
"Program identifier for packaging request.");
ABSL_FLAG(bool,
enable_playready_encryption,
false,
"Enable encryption with PlayReady key.");
ABSL_FLAG(std::string,
playready_server_url,
"",
"PlayReady packaging server url.");
ABSL_FLAG(std::string,
program_identifier,
"",
"Program identifier for packaging request.");
namespace shaka {
namespace {
@ -26,13 +32,15 @@ bool ValidatePRCryptoFlags() {
bool success = true;
const char playready_label[] = "--enable_playready_encryption";
bool playready_enabled = FLAGS_enable_playready_encryption;
if (!ValidateFlag("playready_server_url", FLAGS_playready_server_url,
bool playready_enabled = absl::GetFlag(FLAGS_enable_playready_encryption);
if (!ValidateFlag("playready_server_url",
absl::GetFlag(FLAGS_playready_server_url),
playready_enabled, !kFlagIsOptional, playready_label)) {
success = false;
}
if (!ValidateFlag("program_identifier", FLAGS_program_identifier,
playready_enabled, !kFlagIsOptional, playready_label)) {
if (!ValidateFlag("program_identifier",
absl::GetFlag(FLAGS_program_identifier), playready_enabled,
!kFlagIsOptional, playready_label)) {
success = false;
}
return success;


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -9,13 +9,12 @@
#ifndef APP_PLAYREADY_KEY_ENCRYPTION_FLAGS_H_
#define APP_PLAYREADY_KEY_ENCRYPTION_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
#include "packager/app/gflags_hex_bytes.h"
DECLARE_bool(enable_playready_encryption);
DECLARE_string(playready_server_url);
DECLARE_string(program_identifier);
ABSL_DECLARE_FLAG(bool, enable_playready_encryption);
ABSL_DECLARE_FLAG(std::string, playready_server_url);
ABSL_DECLARE_FLAG(std::string, program_identifier);
namespace shaka {


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,9 +6,10 @@
//
// Defines command line flags for protection systems.
#include "packager/app/protection_system_flags.h"
#include <packager/app/protection_system_flags.h>
DEFINE_string(
ABSL_FLAG(
std::string,
protection_systems,
"",
"Protection systems to be generated. Supported protection systems include "


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -9,8 +9,9 @@
#ifndef PACKAGER_APP_PROTECTION_SYSTEM_FLAGS_H_
#define PACKAGER_APP_PROTECTION_SYSTEM_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_string(protection_systems);
ABSL_DECLARE_FLAG(std::string, protection_systems);
#endif // PACKAGER_APP_PROTECTION_SYSTEM_FLAGS_H_


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,93 +6,103 @@
//
// Defines command line flags for raw key encryption/decryption.
#include "packager/app/raw_key_encryption_flags.h"
#include <packager/app/validate_flag.h>
#include <packager/utils/absl_flag_hexbytes.h>
#include "packager/app/validate_flag.h"
DEFINE_bool(enable_fixed_key_encryption,
false,
"Same as --enable_raw_key_encryption. Will be deprecated.");
DEFINE_bool(enable_fixed_key_decryption,
false,
"Same as --enable_raw_key_decryption. Will be deprecated.");
DEFINE_bool(enable_raw_key_encryption,
false,
"Enable encryption with raw key (key provided in command line).");
DEFINE_bool(enable_raw_key_decryption,
false,
"Enable decryption with raw key (key provided in command line).");
DEFINE_hex_bytes(
key_id,
"",
"Key id in hex string format. Will be deprecated. Use --keys.");
DEFINE_hex_bytes(key,
"",
"Key in hex string format. Will be deprecated. Use --keys.");
DEFINE_string(keys,
"",
"A list of key information in the form of label=<drm "
"label>:key_id=<32-digit key id in hex>:key=<32-digit key in "
"hex>,label=...");
DEFINE_hex_bytes(
iv,
"",
"IV in hex string format. If not specified, a random IV will be "
"generated. This flag should only be used for testing.");
DEFINE_hex_bytes(
pssh,
"",
"One or more PSSH boxes in hex string format. If not specified, "
"will generate a v1 common PSSH box as specified in "
"https://goo.gl/s8RIhr.");
ABSL_FLAG(bool,
enable_fixed_key_encryption,
false,
"Same as --enable_raw_key_encryption. Will be deprecated.");
ABSL_FLAG(bool,
enable_fixed_key_decryption,
false,
"Same as --enable_raw_key_decryption. Will be deprecated.");
ABSL_FLAG(bool,
enable_raw_key_encryption,
false,
"Enable encryption with raw key (key provided in command line).");
ABSL_FLAG(bool,
enable_raw_key_decryption,
false,
"Enable decryption with raw key (key provided in command line).");
ABSL_FLAG(shaka::HexBytes,
key_id,
{},
"Key id in hex string format. Will be deprecated. Use --keys.");
ABSL_FLAG(shaka::HexBytes,
key,
{},
"Key in hex string format. Will be deprecated. Use --keys.");
ABSL_FLAG(std::string,
keys,
"",
"A list of key information in the form of label=<drm "
"label>:key_id=<32-digit key id in hex>:key=<32-digit key in "
"hex>,label=...");
ABSL_FLAG(shaka::HexBytes,
iv,
{},
"IV in hex string format. If not specified, a random IV will be "
"generated. This flag should only be used for testing.");
ABSL_FLAG(shaka::HexBytes,
pssh,
{},
"One or more PSSH boxes in hex string format. If not specified, "
"will generate a v1 common PSSH box as specified in "
"https://goo.gl/s8RIhr.");
namespace shaka {
bool ValidateRawKeyCryptoFlags() {
bool success = true;
if (FLAGS_enable_fixed_key_encryption)
FLAGS_enable_raw_key_encryption = true;
if (FLAGS_enable_fixed_key_decryption)
FLAGS_enable_raw_key_decryption = true;
if (FLAGS_enable_fixed_key_encryption || FLAGS_enable_fixed_key_decryption) {
if (absl::GetFlag(FLAGS_enable_fixed_key_encryption))
absl::SetFlag(&FLAGS_enable_raw_key_encryption, true);
if (absl::GetFlag(FLAGS_enable_fixed_key_decryption))
absl::SetFlag(&FLAGS_enable_raw_key_decryption, true);
if (absl::GetFlag(FLAGS_enable_fixed_key_encryption) ||
absl::GetFlag(FLAGS_enable_fixed_key_decryption)) {
PrintWarning(
"--enable_fixed_key_encryption and --enable_fixed_key_decryption are "
"going to be deprecated. Please switch to --enable_raw_key_encryption "
"and --enable_raw_key_decryption as soon as possible.");
}
const bool raw_key_crypto =
FLAGS_enable_raw_key_encryption || FLAGS_enable_raw_key_decryption;
const bool raw_key_crypto = absl::GetFlag(FLAGS_enable_raw_key_encryption) ||
absl::GetFlag(FLAGS_enable_raw_key_decryption);
const char raw_key_crypto_label[] = "--enable_raw_key_encryption/decryption";
// --key_id and --key are associated with --enable_raw_key_encryption and
// --enable_raw_key_decryption.
if (FLAGS_keys.empty()) {
if (!ValidateFlag("key_id", FLAGS_key_id_bytes, raw_key_crypto, false,
raw_key_crypto_label)) {
if (absl::GetFlag(FLAGS_keys).empty()) {
if (!ValidateFlag("key_id", absl::GetFlag(FLAGS_key_id).bytes,
raw_key_crypto, false, raw_key_crypto_label)) {
success = false;
}
if (!ValidateFlag("key", FLAGS_key_bytes, raw_key_crypto, false,
raw_key_crypto_label)) {
if (!ValidateFlag("key", absl::GetFlag(FLAGS_key).bytes, raw_key_crypto,
false, raw_key_crypto_label)) {
success = false;
}
if (success && (!FLAGS_key_id_bytes.empty() || !FLAGS_key_bytes.empty())) {
if (success && (!absl::GetFlag(FLAGS_key_id).bytes.empty() ||
!absl::GetFlag(FLAGS_key).bytes.empty())) {
PrintWarning(
"--key_id and --key are going to be deprecated. Please switch to "
"--keys as soon as possible.");
}
} else {
if (!FLAGS_key_id_bytes.empty() || !FLAGS_key_bytes.empty()) {
if (!absl::GetFlag(FLAGS_key_id).bytes.empty() ||
!absl::GetFlag(FLAGS_key).bytes.empty()) {
PrintError("--key_id or --key cannot be used together with --keys.");
success = false;
}
}
if (!ValidateFlag("iv", FLAGS_iv_bytes, FLAGS_enable_raw_key_encryption, true,
if (!ValidateFlag("iv", absl::GetFlag(FLAGS_iv).bytes,
absl::GetFlag(FLAGS_enable_raw_key_encryption), true,
"--enable_raw_key_encryption")) {
success = false;
}
if (!FLAGS_iv_bytes.empty()) {
if (FLAGS_iv_bytes.size() != 8 && FLAGS_iv_bytes.size() != 16) {
if (!absl::GetFlag(FLAGS_iv).bytes.empty()) {
if (absl::GetFlag(FLAGS_iv).bytes.size() != 8 &&
absl::GetFlag(FLAGS_iv).bytes.size() != 16) {
PrintError(
"--iv should be either 8 bytes (16 hex digits) or 16 bytes (32 hex "
"digits).");
@ -101,8 +111,9 @@ bool ValidateRawKeyCryptoFlags() {
}
// --pssh is associated with --enable_raw_key_encryption.
if (!ValidateFlag("pssh", FLAGS_pssh_bytes, FLAGS_enable_raw_key_encryption,
true, "--enable_raw_key_encryption")) {
if (!ValidateFlag("pssh", absl::GetFlag(FLAGS_pssh).bytes,
absl::GetFlag(FLAGS_enable_raw_key_encryption), true,
"--enable_raw_key_encryption")) {
success = false;
}
return success;
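
The hex-valued flags above (`--key_id`, `--key`, `--iv`, `--pssh`) move from the gflags `DEFINE_hex_bytes` wrapper to a custom Abseil flag type, `shaka::HexBytes`. Custom Abseil flag types work by defining `AbslParseFlag`/`AbslUnparseFlag` overloads found by ADL; a hedged stand-in for the real type in `packager/utils/absl_flag_hexbytes.h`:

```cpp
// Illustrative stand-in for shaka::HexBytes; the real implementation lives
// in packager/utils/absl_flag_hexbytes.h.
#include <cstdint>
#include <string>
#include <vector>

#include <absl/strings/escaping.h>
#include <absl/strings/string_view.h>

struct HexBytes {
  std::vector<uint8_t> bytes;
};

// Called by the flags library for e.g. --key_id=deadbeef.
bool AbslParseFlag(absl::string_view text, HexBytes* out, std::string* error) {
  auto nibble = [](char c) -> int {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
  };
  if (text.size() % 2 != 0) {
    *error = "hex string must have an even number of digits";
    return false;
  }
  out->bytes.clear();
  for (size_t i = 0; i < text.size(); i += 2) {
    const int hi = nibble(text[i]);
    const int lo = nibble(text[i + 1]);
    if (hi < 0 || lo < 0) {
      *error = "invalid hex digit";
      return false;
    }
    out->bytes.push_back(static_cast<uint8_t>((hi << 4) | lo));
  }
  return true;
}

// Called to render the flag's value, e.g. for --help.
std::string AbslUnparseFlag(const HexBytes& value) {
  return absl::BytesToHexString(absl::string_view(
      reinterpret_cast<const char*>(value.bytes.data()), value.bytes.size()));
}
```

With those overloads in place, `ABSL_FLAG(shaka::HexBytes, key_id, {}, ...)` behaves like any built-in flag type, and call sites read the decoded bytes via `absl::GetFlag(FLAGS_key_id).bytes`, as seen throughout this diff.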


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -9,17 +9,18 @@
#ifndef PACKAGER_APP_RAW_KEY_ENCRYPTION_FLAGS_H_
#define PACKAGER_APP_RAW_KEY_ENCRYPTION_FLAGS_H_
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
#include "packager/app/gflags_hex_bytes.h"
#include <packager/utils/absl_flag_hexbytes.h>
DECLARE_bool(enable_raw_key_encryption);
DECLARE_bool(enable_raw_key_decryption);
DECLARE_hex_bytes(key_id);
DECLARE_hex_bytes(key);
DECLARE_string(keys);
DECLARE_hex_bytes(iv);
DECLARE_hex_bytes(pssh);
ABSL_DECLARE_FLAG(bool, enable_raw_key_encryption);
ABSL_DECLARE_FLAG(bool, enable_raw_key_decryption);
ABSL_DECLARE_FLAG(shaka::HexBytes, key_id);
ABSL_DECLARE_FLAG(shaka::HexBytes, key);
ABSL_DECLARE_FLAG(std::string, keys);
ABSL_DECLARE_FLAG(shaka::HexBytes, iv);
ABSL_DECLARE_FLAG(shaka::HexBytes, pssh);
namespace shaka {


@ -1,4 +1,4 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -7,46 +7,56 @@
// Defines retired / deprecated flags. These flags will be removed in later
// versions.
#include "packager/app/retired_flags.h"
#include <packager/app/retired_flags.h>
#include <stdio.h>
#include <cstdio>
DEFINE_string(profile, "", "This flag is deprecated. Do not use.");
DEFINE_bool(single_segment, true, "This flag is deprecated. Do not use.");
DEFINE_bool(webm_subsample_encryption,
true,
"This flag is deprecated. Use vp9_subsample_encryption instead.");
DEFINE_double(availability_time_offset,
0,
"This flag is deprecated. Use suggested_presentation_delay "
"instead which can achieve similar effect.");
DEFINE_string(playready_key_id,
"",
"This flag is deprecated. Use --enable_raw_key_encryption with "
"--generate_playready_pssh to generate PlayReady PSSH.");
DEFINE_string(playready_key,
"",
"This flag is deprecated. Use --enable_raw_key_encryption with "
"--generate_playready_pssh to generate PlayReady PSSH.");
DEFINE_bool(mp4_use_decoding_timestamp_in_timeline,
false,
"This flag is deprecated. Do not use.");
DEFINE_int32(
ABSL_FLAG(std::string, profile, "", "This flag is deprecated. Do not use.");
ABSL_FLAG(bool, single_segment, true, "This flag is deprecated. Do not use.");
ABSL_FLAG(bool,
webm_subsample_encryption,
true,
"This flag is deprecated. Use vp9_subsample_encryption instead.");
ABSL_FLAG(double,
availability_time_offset,
0,
"This flag is deprecated. Use suggested_presentation_delay "
"instead which can achieve similar effect.");
ABSL_FLAG(std::string,
playready_key_id,
"",
"This flag is deprecated. Use --enable_raw_key_encryption with "
"--generate_playready_pssh to generate PlayReady PSSH.");
ABSL_FLAG(std::string,
playready_key,
"",
"This flag is deprecated. Use --enable_raw_key_encryption with "
"--generate_playready_pssh to generate PlayReady PSSH.");
ABSL_FLAG(bool,
mp4_use_decoding_timestamp_in_timeline,
false,
"This flag is deprecated. Do not use.");
ABSL_FLAG(
int32_t,
num_subsegments_per_sidx,
0,
"This flag is deprecated. Use --generate_sidx_in_media_segments instead.");
DEFINE_bool(generate_widevine_pssh,
false,
"This flag is deprecated. Use --protection_systems instead.");
DEFINE_bool(generate_playready_pssh,
false,
"This flag is deprecated. Use --protection_systems instead.");
DEFINE_bool(generate_common_pssh,
false,
"This flag is deprecated. Use --protection_systems instead.");
DEFINE_bool(generate_static_mpd,
false,
"This flag is deprecated. Use --generate_static_live_mpd instead.");
ABSL_FLAG(bool,
generate_widevine_pssh,
false,
"This flag is deprecated. Use --protection_systems instead.");
ABSL_FLAG(bool,
generate_playready_pssh,
false,
"This flag is deprecated. Use --protection_systems instead.");
ABSL_FLAG(bool,
generate_common_pssh,
false,
"This flag is deprecated. Use --protection_systems instead.");
ABSL_FLAG(bool,
generate_static_mpd,
false,
"This flag is deprecated. Use --generate_static_live_mpd instead.");
// The current gflags library does not provide a way to check whether a flag is
// set in command line. If a flag has a different value to its default value,
@ -102,16 +112,70 @@ bool InformRetiredGenerateStaticMpdFlag(const char* flagname, bool value) {
return true;
}
DEFINE_validator(profile, &InformRetiredStringFlag);
DEFINE_validator(single_segment, &InformRetiredDefaultTrueFlag);
DEFINE_validator(webm_subsample_encryption, &InformRetiredDefaultTrueFlag);
DEFINE_validator(availability_time_offset, &InformRetiredDefaultDoubleFlag);
DEFINE_validator(playready_key_id, &InformRetiredStringFlag);
DEFINE_validator(playready_key, &InformRetiredStringFlag);
DEFINE_validator(mp4_use_decoding_timestamp_in_timeline,
&InformRetiredDefaultFalseFlag);
DEFINE_validator(num_subsegments_per_sidx, &InformRetiredDefaultInt32Flag);
DEFINE_validator(generate_widevine_pssh, &InformRetiredPsshGenerationFlag);
DEFINE_validator(generate_playready_pssh, &InformRetiredPsshGenerationFlag);
DEFINE_validator(generate_common_pssh, &InformRetiredPsshGenerationFlag);
DEFINE_validator(generate_static_mpd, &InformRetiredGenerateStaticMpdFlag);
namespace shaka {
bool ValidateRetiredFlags() {
bool success = true;
auto profile = absl::GetFlag(FLAGS_profile);
if (!InformRetiredStringFlag("profile", profile)) {
success = false;
}
auto single_segment = absl::GetFlag(FLAGS_single_segment);
if (!InformRetiredDefaultTrueFlag("single_segment", single_segment)) {
success = false;
}
auto webm_subsample_encryption =
absl::GetFlag(FLAGS_webm_subsample_encryption);
if (!InformRetiredDefaultTrueFlag("webm_subsample_encryption",
webm_subsample_encryption)) {
success = false;
}
auto availability_time_offset = absl::GetFlag(FLAGS_availability_time_offset);
if (!InformRetiredDefaultDoubleFlag("availability_time_offset",
availability_time_offset)) {
success = false;
}
auto playready_key_id = absl::GetFlag(FLAGS_playready_key_id);
if (!InformRetiredStringFlag("playready_key_id", playready_key_id)) {
success = false;
}
auto playready_key = absl::GetFlag(FLAGS_playready_key);
if (!InformRetiredStringFlag("playready_key", playready_key)) {
success = false;
}
auto mp4_use_decoding_timestamp_in_timeline =
absl::GetFlag(FLAGS_mp4_use_decoding_timestamp_in_timeline);
if (!InformRetiredDefaultFalseFlag("mp4_use_decoding_timestamp_in_timeline",
mp4_use_decoding_timestamp_in_timeline)) {
success = false;
}
auto num_subsegments_per_sidx = absl::GetFlag(FLAGS_num_subsegments_per_sidx);
if (!InformRetiredDefaultInt32Flag("num_subsegments_per_sidx",
num_subsegments_per_sidx)) {
success = false;
}
auto generate_widevine_pssh = absl::GetFlag(FLAGS_generate_widevine_pssh);
if (!InformRetiredPsshGenerationFlag("generate_widevine_pssh",
generate_widevine_pssh)) {
success = false;
}
auto generate_playready_pssh = absl::GetFlag(FLAGS_generate_playready_pssh);
if (!InformRetiredPsshGenerationFlag("generate_playready_pssh",
generate_playready_pssh)) {
success = false;
}
auto generate_common_pssh = absl::GetFlag(FLAGS_generate_common_pssh);
if (!InformRetiredPsshGenerationFlag("generate_common_pssh",
generate_common_pssh)) {
success = false;
}
auto generate_static_mpd = absl::GetFlag(FLAGS_generate_static_mpd);
if (!InformRetiredGenerateStaticMpdFlag("generate_static_mpd",
generate_static_mpd)) {
success = false;
}
return success;
}
} // namespace shaka
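
retired_flags.cc shows the other structural consequence of leaving gflags: `DEFINE_validator` callbacks ran automatically during flag parsing, whereas this port gathers the checks into an explicit `ValidateRetiredFlags()` that `PackagerMain()` calls after `absl::ParseCommandLine()`. A sketch of the shape, with hypothetical names:

```cpp
// Sketch of the validator migration; names are illustrative.
#include <cstdio>
#include <string>

#include <absl/flags/flag.h>
#include <absl/flags/parse.h>

ABSL_FLAG(std::string, old_flag, "", "This flag is deprecated. Do not use.");

bool InformRetiredStringFlag(const char* name, const std::string& value) {
  if (!value.empty())
    std::fprintf(stderr, "--%s is deprecated and ignored.\n", name);
  return true;  // retired flags warn rather than fail
}

int main(int argc, char** argv) {
  absl::ParseCommandLine(argc, argv);
  // Explicit call replaces gflags' DEFINE_validator(old_flag, ...).
  const bool ok =
      InformRetiredStringFlag("old_flag", absl::GetFlag(FLAGS_old_flag));
  return ok ? 0 : 1;
}
```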


@ -1,19 +1,24 @@
// Copyright 2017 Google Inc. All rights reserved.
// Copyright 2017 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include <gflags/gflags.h>
#include <absl/flags/declare.h>
#include <absl/flags/flag.h>
DECLARE_string(profile);
DECLARE_bool(single_segment);
DECLARE_bool(webm_subsample_encryption);
DECLARE_double(availability_time_offset);
DECLARE_string(playready_key_id);
DECLARE_string(playready_key);
DECLARE_bool(mp4_use_decoding_timestamp_in_timeline);
DECLARE_int32(num_subsegments_per_sidx);
DECLARE_bool(generate_widevine_pssh);
DECLARE_bool(generate_playready_pssh);
DECLARE_bool(generate_common_pssh);
ABSL_DECLARE_FLAG(std::string, profile);
ABSL_DECLARE_FLAG(bool, single_segment);
ABSL_DECLARE_FLAG(bool, webm_subsample_encryption);
ABSL_DECLARE_FLAG(double, availability_time_offset);
ABSL_DECLARE_FLAG(std::string, playready_key_id);
ABSL_DECLARE_FLAG(std::string, playready_key);
ABSL_DECLARE_FLAG(bool, mp4_use_decoding_timestamp_in_timeline);
ABSL_DECLARE_FLAG(int32_t, num_subsegments_per_sidx);
ABSL_DECLARE_FLAG(bool, generate_widevine_pssh);
ABSL_DECLARE_FLAG(bool, generate_playready_pssh);
ABSL_DECLARE_FLAG(bool, generate_common_pssh);
namespace shaka {
bool ValidateRetiredFlags();
}


@ -1,13 +1,13 @@
// Copyright 2020 Google LLLC All rights reserved.
// Copyright 2020 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/single_thread_job_manager.h"
#include <packager/app/single_thread_job_manager.h>
#include "packager/media/chunking/sync_point_queue.h"
#include "packager/media/origin/origin_handler.h"
#include <packager/media/chunking/sync_point_queue.h>
#include <packager/media/origin/origin_handler.h>
namespace shaka {
namespace media {
@ -16,17 +16,12 @@ SingleThreadJobManager::SingleThreadJobManager(
std::unique_ptr<SyncPointQueue> sync_points)
: JobManager(std::move(sync_points)) {}
Status SingleThreadJobManager::InitializeJobs() {
Status status;
for (const JobEntry& job_entry : job_entries_)
status.Update(job_entry.worker->Initialize());
return status;
}
Status SingleThreadJobManager::RunJobs() {
Status status;
for (const JobEntry& job_entry : job_entries_)
status.Update(job_entry.worker->Run());
for (auto& job : jobs_)
status.Update(job->Run());
return status;
}


@ -1,4 +1,4 @@
// Copyright 2020 Google LLLC All rights reserved.
// Copyright 2020 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -9,7 +9,7 @@
#include <memory>
#include "packager/app/job_manager.h"
#include <packager/app/job_manager.h>
namespace shaka {
namespace media {
@ -22,7 +22,7 @@ class SingleThreadJobManager : public JobManager {
// fails or is cancelled. It can be NULL.
explicit SingleThreadJobManager(std::unique_ptr<SyncPointQueue> sync_points);
Status InitializeJobs() override;
// Run all registered jobs serially in this thread.
Status RunJobs() override;
};


@ -1,14 +1,17 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/app/stream_descriptor.h"
#include <packager/app/stream_descriptor.h>
#include "packager/base/logging.h"
#include "packager/base/strings/string_number_conversions.h"
#include "packager/base/strings/string_split.h"
#include <absl/log/log.h>
#include <absl/strings/numbers.h>
#include <absl/strings/str_split.h>
#include <packager/kv_pairs/kv_pairs.h>
#include <packager/utils/string_trim_split.h>
namespace shaka {
@ -86,7 +89,7 @@ const FieldNameToTypeMapping kFieldNameTypeMappings[] = {
};
FieldType GetFieldType(const std::string& field_name) {
for (size_t idx = 0; idx < arraysize(kFieldNameTypeMappings); ++idx) {
for (size_t idx = 0; idx < std::size(kFieldNameTypeMappings); ++idx) {
if (field_name == kFieldNameTypeMappings[idx].field_name)
return kFieldNameTypeMappings[idx].field_type;
}
@ -95,162 +98,162 @@ FieldType GetFieldType(const std::string& field_name) {
} // anonymous namespace
base::Optional<StreamDescriptor> ParseStreamDescriptor(
std::optional<StreamDescriptor> ParseStreamDescriptor(
const std::string& descriptor_string) {
StreamDescriptor descriptor;
// Split descriptor string into name/value pairs.
base::StringPairs pairs;
if (!base::SplitStringIntoKeyValuePairs(descriptor_string, '=', ',',
&pairs)) {
std::vector<KVPair> kv_pairs =
SplitStringIntoKeyValuePairs(descriptor_string, '=', ',');
if (kv_pairs.empty()) {
LOG(ERROR) << "Invalid stream descriptors name/value pairs: "
<< descriptor_string;
return base::nullopt;
return std::nullopt;
}
for (base::StringPairs::const_iterator iter = pairs.begin();
iter != pairs.end(); ++iter) {
switch (GetFieldType(iter->first)) {
std::vector<absl::string_view> tokens;
for (const auto& pair : kv_pairs) {
switch (GetFieldType(pair.first)) {
case kStreamSelectorField:
descriptor.stream_selector = iter->second;
descriptor.stream_selector = pair.second;
break;
case kInputField:
descriptor.input = iter->second;
descriptor.input = pair.second;
break;
case kOutputField:
descriptor.output = iter->second;
descriptor.output = pair.second;
break;
case kSegmentTemplateField:
descriptor.segment_template = iter->second;
descriptor.segment_template = pair.second;
break;
case kBandwidthField: {
unsigned bw;
if (!base::StringToUint(iter->second, &bw)) {
if (!absl::SimpleAtoi(pair.second, &bw)) {
LOG(ERROR) << "Non-numeric bandwidth specified.";
return base::nullopt;
return std::nullopt;
}
descriptor.bandwidth = bw;
break;
}
case kLanguageField: {
descriptor.language = iter->second;
descriptor.language = pair.second;
break;
}
case kCcIndexField: {
unsigned index;
if (!base::StringToUint(iter->second, &index)) {
if (!absl::SimpleAtoi(pair.second, &index)) {
LOG(ERROR) << "Non-numeric cc_index specified.";
return base::nullopt;
return std::nullopt;
}
descriptor.cc_index = index;
break;
}
case kOutputFormatField: {
descriptor.output_format = iter->second;
descriptor.output_format = pair.second;
break;
}
case kHlsNameField: {
descriptor.hls_name = iter->second;
descriptor.hls_name = pair.second;
break;
}
case kHlsGroupIdField: {
descriptor.hls_group_id = iter->second;
descriptor.hls_group_id = pair.second;
break;
}
case kHlsPlaylistNameField: {
descriptor.hls_playlist_name = iter->second;
descriptor.hls_playlist_name = pair.second;
break;
}
case kHlsIframePlaylistNameField: {
descriptor.hls_iframe_playlist_name = iter->second;
descriptor.hls_iframe_playlist_name = pair.second;
break;
}
case kTrickPlayFactorField: {
unsigned factor;
if (!base::StringToUint(iter->second, &factor)) {
LOG(ERROR) << "Non-numeric trick play factor " << iter->second
if (!absl::SimpleAtoi(pair.second, &factor)) {
LOG(ERROR) << "Non-numeric trick play factor " << pair.second
<< " specified.";
return base::nullopt;
return std::nullopt;
}
if (factor == 0) {
LOG(ERROR) << "Stream trick_play_factor should be > 0.";
return base::nullopt;
return std::nullopt;
}
descriptor.trick_play_factor = factor;
break;
}
case kSkipEncryptionField: {
unsigned skip_encryption_value;
if (!base::StringToUint(iter->second, &skip_encryption_value)) {
if (!absl::SimpleAtoi(pair.second, &skip_encryption_value)) {
LOG(ERROR) << "Non-numeric option for skip encryption field "
"specified (" << iter->second << ").";
return base::nullopt;
"specified ("
<< pair.second << ").";
return std::nullopt;
}
if (skip_encryption_value > 1) {
LOG(ERROR) << "skip_encryption should be either 0 or 1.";
return base::nullopt;
return std::nullopt;
}
descriptor.skip_encryption = skip_encryption_value > 0;
break;
}
case kDrmStreamLabelField:
descriptor.drm_label = iter->second;
descriptor.drm_label = pair.second;
break;
case kHlsCharacteristicsField:
descriptor.hls_characteristics =
base::SplitString(iter->second, ";:", base::TRIM_WHITESPACE,
base::SPLIT_WANT_NONEMPTY);
SplitAndTrimSkipEmpty(pair.second, ';');
break;
case kDashAccessiblitiesField:
case kDashAccessiblitiesField: {
descriptor.dash_accessiblities =
base::SplitString(iter->second, ";", base::TRIM_WHITESPACE,
base::SPLIT_WANT_NONEMPTY);
SplitAndTrimSkipEmpty(pair.second, ';');
for (const std::string& accessibility :
descriptor.dash_accessiblities) {
size_t pos = accessibility.find('=');
if (pos == std::string::npos) {
LOG(ERROR)
<< "Accessibility should be in scheme=value format, but seeing "
<< accessibility;
return base::nullopt;
LOG(ERROR) << "Accessibility should be in scheme=value format, "
"but seeing "
<< accessibility;
return std::nullopt;
}
}
break;
} break;
case kDashRolesField:
descriptor.dash_roles =
base::SplitString(iter->second, ";", base::TRIM_WHITESPACE,
base::SPLIT_WANT_NONEMPTY);
descriptor.dash_roles = SplitAndTrimSkipEmpty(pair.second, ';');
break;
case kDashOnlyField:
unsigned dash_only_value;
if (!base::StringToUint(iter->second, &dash_only_value)) {
if (!absl::SimpleAtoi(pair.second, &dash_only_value)) {
LOG(ERROR) << "Non-numeric option for dash_only field "
"specified (" << iter->second << ").";
return base::nullopt;
"specified ("
<< pair.second << ").";
return std::nullopt;
}
if (dash_only_value > 1) {
LOG(ERROR) << "dash_only should be either 0 or 1.";
return base::nullopt;
return std::nullopt;
}
descriptor.dash_only = dash_only_value > 0;
break;
case kHlsOnlyField:
unsigned hls_only_value;
if (!base::StringToUint(iter->second, &hls_only_value)) {
if (!absl::SimpleAtoi(pair.second, &hls_only_value)) {
LOG(ERROR) << "Non-numeric option for hls_only field "
"specified (" << iter->second << ").";
return base::nullopt;
"specified ("
<< pair.second << ").";
return std::nullopt;
}
if (hls_only_value > 1) {
LOG(ERROR) << "hls_only should be either 0 or 1.";
return base::nullopt;
return std::nullopt;
}
descriptor.hls_only = hls_only_value > 0;
break;
default:
LOG(ERROR) << "Unknown field in stream descriptor (\"" << iter->first
LOG(ERROR) << "Unknown field in stream descriptor (\"" << pair.first
<< "\").";
return base::nullopt;
return std::nullopt;
}
}
return descriptor;
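
For reference, a descriptor string exercising a few of the fields handled above; `ParseStreamDescriptor()` now reports failure through `std::optional`. A sketch assuming the packager headers are on the include path, with field names taken from the packager's stream descriptor syntax:

```cpp
// Sketch: parsing a stream descriptor with the updated API.
#include <iostream>
#include <optional>
#include <string>

#include <packager/app/stream_descriptor.h>

int main() {
  const std::string descriptor =
      "in=input.mp4,stream=audio,output=audio.mp4,language=en";
  std::optional<shaka::StreamDescriptor> parsed =
      shaka::ParseStreamDescriptor(descriptor);
  if (!parsed) {
    std::cerr << "invalid descriptor\n";
    return 1;
  }
  std::cout << "input: " << parsed->input
            << ", language: " << parsed->language << "\n";
  return 0;
}
```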


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -7,10 +7,10 @@
#ifndef APP_STREAM_DESCRIPTOR_H_
#define APP_STREAM_DESCRIPTOR_H_
#include <optional>
#include <string>
#include "packager/base/optional.h"
#include "packager/packager.h"
#include <packager/packager.h>
namespace shaka {
@ -21,7 +21,7 @@ namespace shaka {
/// @param descriptor_list is a pointer to the sorted descriptor list into
/// which the new descriptor should be inserted.
/// @return true if successful, false otherwise. May print error messages.
base::Optional<StreamDescriptor> ParseStreamDescriptor(
std::optional<StreamDescriptor> ParseStreamDescriptor(
const std::string& descriptor_string);
} // namespace shaka


@ -1,4 +1,4 @@
# Copyright 2014 Google Inc. All Rights Reserved.
# Copyright 2014 Google LLC. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at


@ -1,6 +1,6 @@
#!/usr/bin/python3
#
# Copyright 2014 Google Inc. All Rights Reserved.
# Copyright 2014 Google LLC. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at


@ -1,4 +1,4 @@
# Copyright 2014 Google Inc. All Rights Reserved.
# Copyright 2014 Google LLC. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,9 +6,9 @@
//
// Flag validation help functions.
#include "packager/app/validate_flag.h"
#include <packager/app/validate_flag.h>
#include <stdio.h>
#include <cstdio>
namespace shaka {


@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All rights reserved.
// Copyright 2014 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,12 +6,12 @@
//
// Flag validation help functions.
#ifndef APP_VALIDATE_FLAG_H_
#define APP_VALIDATE_FLAG_H_
#ifndef PACKAGER_APP_VALIDATE_FLAG_H_
#define PACKAGER_APP_VALIDATE_FLAG_H_
#include <string>
#include "packager/base/strings/stringprintf.h"
#include <absl/strings/str_format.h>
namespace shaka {
@ -41,13 +41,12 @@ bool ValidateFlag(const char* flag_name,
const char* label) {
if (flag_value.empty()) {
if (!optional && condition) {
PrintError(
base::StringPrintf("--%s is required if %s.", flag_name, label));
PrintError(absl::StrFormat("--%s is required if %s.", flag_name, label));
return false;
}
} else if (!condition) {
PrintError(base::StringPrintf(
"--%s should be specified only if %s.", flag_name, label));
PrintError(absl::StrFormat("--%s should be specified only if %s.",
flag_name, label));
return false;
}
return true;
@ -55,4 +54,4 @@ bool ValidateFlag(const char* flag_name,
} // namespace shaka
#endif // APP_VALIDATE_FLAG_H_
#endif // PACKAGER_APP_VALIDATE_FLAG_H_


@ -1,4 +1,4 @@
// Copyright 2015 Google Inc. All rights reserved.
// Copyright 2015 Google LLC. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
@ -6,21 +6,67 @@
//
// Defines verbose logging flags.
#include "packager/app/vlog_flags.h"
#include <packager/app/vlog_flags.h>
DEFINE_int32(v,
0,
"Show all VLOG(m) or DVLOG(m) messages for m <= this. "
"Overridable by --vmodule.");
DEFINE_string(
#include <absl/log/globals.h>
#include <absl/log/log.h>
#include <absl/strings/numbers.h>
#include <packager/kv_pairs/kv_pairs.h>
#include <packager/macros/logging.h>
ABSL_FLAG(int,
v,
0,
"Show all VLOG(m) or DVLOG(m) messages for m <= this. "
"Overridable by --vmodule.");
ABSL_FLAG(
std::string,
vmodule,
"",
"Per-module verbose level."
"Per-module verbose level. THIS FLAG IS DEPRECATED. "
"Argument is a comma-separated list of <module name>=<log level>. "
"<module name> is a glob pattern, matched against the filename base "
"(that is, name ignoring .cc/.h./-inl.h). "
"A pattern without slashes matches just the file name portion, otherwise "
"the whole file path (still without .cc/.h./-inl.h) is matched. "
"? and * in the glob pattern match any single or sequence of characters "
"respectively including slashes. "
"<log level> overrides any value given by --v.");
"The logging system no longer supports different levels for different "
"modules, so the verbosity level will be set to the maximum specified for "
"any module or given by --v.");
ABSL_DECLARE_FLAG(int, minloglevel);
namespace shaka {
void handle_vlog_flags() {
// Reference the log level flag to keep the absl::log flags from getting
// stripped from the executable.
int log_level = absl::GetFlag(FLAGS_minloglevel);
(void)log_level;
int vlog_level = absl::GetFlag(FLAGS_v);
std::string vmodule_patterns = absl::GetFlag(FLAGS_vmodule);
if (!vmodule_patterns.empty()) {
std::vector<KVPair> patterns =
SplitStringIntoKeyValuePairs(vmodule_patterns, '=', ',');
int pattern_vlevel;
bool warning_shown = false;
for (const auto& pattern : patterns) {
if (!warning_shown) {
LOG(WARNING) << "--vmodule ignored, combined with --v!";
warning_shown = true;
}
if (!::absl::SimpleAtoi(pattern.second, &pattern_vlevel)) {
LOG(ERROR) << "Error parsing log level for '" << pattern.first
<< "' from '" << pattern.second << "'";
continue;
}
}
}
if (vlog_level != 0) {
absl::SetMinLogLevel(static_cast<absl::LogSeverityAtLeast>(-vlog_level));
}
}
} // namespace shaka
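
The `--v` handling above folds verbosity into Abseil's severity scale: Abseil's logging has no per-module `vmodule` support, so the port maps verbosity level `v` to a minimum log severity of `-v`. A hedged sketch of the idea, assuming the packager's own `VLOG` macro (from `packager/macros/logging.h`) logs at severity `-level` along these lines:

```cpp
// Hedged sketch of the --v mapping; assumes VLOG(n) logs at severity -n.
#include <absl/log/globals.h>
#include <absl/log/initialize.h>
#include <absl/log/log.h>

int main() {
  absl::InitializeLog();

  // Equivalent of --v=2: admit severities down to -2.
  absl::SetMinLogLevel(static_cast<absl::LogSeverityAtLeast>(-2));

  // LOG(LEVEL(expr)) accepts a runtime severity value.
  LOG(LEVEL(static_cast<absl::LogSeverity>(-1))) << "shown at --v=2";
  LOG(LEVEL(static_cast<absl::LogSeverity>(-3))) << "suppressed at --v=2";
  return 0;
}
```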
