Merge branch 'redesign'

This commit is contained in:
Eloston
2018-08-16 07:02:02 +00:00
310 changed files with 2757 additions and 2878 deletions

.cirrus.yml Normal file

@@ -0,0 +1,25 @@
container:
image: python:3.5-slim
code_check_task:
pip_cache:
folder: ~/.cache/pip
populate_script: pip install pylint requests yapf
pip_install_script: pip install pylint requests yapf
yapf_script:
- python3 -m yapf --style '.style.yapf' -e '*/third_party/*' -rpd buildkit
- python3 -m yapf --style '.style.yapf' -rpd devutils
pylint_script:
- ./devutils/pylint_buildkit.py --hide-fixme
- ./devutils/pylint_devutils.py --hide-fixme devutils
validate_config_task:
validate_config_script: ./devutils/validate_config.py
validate_patches_task:
pip_cache:
folder: ~/.cache/pip
populate_script: pip install requests
pip_install_script: pip install requests
validate_patches_script:
- ./devutils/validate_patches.py -r

.gitignore vendored

@@ -2,9 +2,9 @@
__pycache__/
*.py[cod]
# Ignore buildspace directory
/buildspace
# Ignore macOS Finder meta
.DS_Store
.tm_properties
# Ignore optional build directory
/build

.style.yapf Normal file

@@ -0,0 +1,8 @@
[style]
based_on_style = pep8
allow_split_before_dict_value = false
coalesce_brackets = true
column_limit = 100
indent_width = 4
join_multiple_lines = true
spaces_before_comment = 1

@@ -1,9 +0,0 @@
language: python
python:
- "3.5"
install:
- pip install pylint
script:
- ./developer_utilities/validate_config.py
- ./developer_utilities/pylint_buildkit.py --hide-fixme
- ./developer_utilities/pylint_devutils.py --hide-fixme developer_utilities/

FAQ.md

@@ -1 +0,0 @@
[**The FAQ has moved to the new Wiki**](https://ungoogled-software.github.io/ungoogled-chromium-wiki/faq)

@@ -1,6 +1,8 @@
# ungoogled-chromium
**Modifications to Google Chromium for removing Google integration and enhancing privacy, control, and transparency**
*Bringing back the "Don't" in "Don't be evil"*
**ungoogled-chromium is Google Chromium**, sans integration with Google. It also features some changes to enhance privacy, control, and transparency.
## Motivation and Description
@@ -21,7 +23,7 @@ Since these goals and requirements are not precise, unclear situations are discu
* [Features](#features)
* [Supported platforms and distributions](#supported-platforms-and-distributions)
* [Download pre-built packages](#download-pre-built-packages)
* [**Download pre-built packages**](#download-pre-built-packages)
* [Getting the source code](#getting-the-source-code)
* [Frequently-asked questions](#frequently-asked-questions)
* [Design and implementation](#design-and-implementation)
@@ -40,8 +42,8 @@ ungoogled-chromium selectively borrows many of its features from the following:
* [Iridium Browser](//iridiumbrowser.de/)
Most of the **additional** features are as follows:
* Replace many web domains in the source code with non-existent alternatives ending in `qjz9zk` (known as domain substitution; [see DESIGN.md](DESIGN.md#source-file-processors))
* Strip binaries from the source code (known as binary pruning; [see DESIGN.md](DESIGN.md#source-file-processors))
* Replace many web domains in the source code with non-existent alternatives ending in `qjz9zk` (known as domain substitution; [see docs/design.md](docs/design.md#source-file-processors))
* Strip binaries from the source code (known as binary pruning; [see docs/design.md](docs/design.md#source-file-processors))
* Disable functionality specific to Google domains (e.g. Google Host Detector, Google URL Tracker, Google Cloud Messaging, Google Hotwording, etc.)
* Add Omnibox search provider "No Search" to allow disabling of searching
* Disable automatic formatting of URLs in Omnibox (e.g. stripping `http://`, hiding certain parameters)
@@ -55,7 +57,7 @@ Most of the **additional** features are as follows:
* `--set-ipv6-probe-false` - (Not in `chrome://flags`) Forces the result of the browser's IPv6 probing (i.e. IPv6 connectivity test) to be unsuccessful. This causes IPv4 addresses to be prioritized over IPv6 addresses. Without this flag, the probing result is set to be successful, which causes IPv6 to be used over IPv4 when possible.
* Force all pop-ups into tabs
* Disable [Safe Browsing](//en.wikipedia.org/wiki/Google_Safe_Browsing)
* See the [FAQ](FAQ.md#why-is-safe-browsing-disabled)
* See the [FAQ](//ungoogled-software.github.io/ungoogled-chromium-wiki/faq#why-is-safe-browsing-disabled)
* Disable intranet redirect detector (extraneous DNS requests)
* This breaks captive portal detection, but captive portals still work.
* Add more URL schemes allowed for saving
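The domain substitution feature described in the list above can be sketched as a simple string rewrite. This is a minimal illustration only: the mapping below is invented for the example, and the project's real substitution list is far larger and generated from its own configuration.

```python
import re

# Illustrative subset of a substitution map; the project's real list is
# much larger and is not a simple 1:1 mapping like this.
SUBSTITUTIONS = {
    'google.com': 'google.qjz9zk',
    'gstatic.com': 'gstatic.qjz9zk',
}

def substitute_domains(text: str) -> str:
    """Replace known domains with non-existent .qjz9zk alternatives."""
    pattern = re.compile('|'.join(map(re.escape, SUBSTITUTIONS)))
    return pattern.sub(lambda match: SUBSTITUTIONS[match.group(0)], text)

print(substitute_domains('fetch https://www.google.com/update'))
```

Because the replacement TLD does not exist, any request that slips through still cannot reach a real server.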
@@ -70,7 +72,7 @@ Most of the **additional** features are as follows:
### Supported platforms and distributions
Currently, only desktop platforms are supported. Functionality of specific desktop platforms may vary across different releases. For more details, see [Statuses in the Wiki](//github.com/Eloston/ungoogled-chromium/wiki/statuses).
Currently, only desktop platforms are supported. Functionality of specific desktop platforms may vary across different releases. For more details, see [Statuses in the Wiki](//ungoogled-software.github.io/ungoogled-chromium-wiki/statuses).
Other platforms are discussed and tracked in GitHub's Issue Tracker. Learn more about using the Issue Tracker under the section [Contributing, Reporting, Contacting](#contributing-reporting-contacting).
@@ -101,17 +103,19 @@ Tags are versioned in the following format: `{chromium_version}-{release_revisio
* `chromium_version` is the version of Chromium used in `x.x.x.x` format, and
* `release_revision` is a number indicating the version of ungoogled-chromium for the corresponding Chromium version.
Not all tags are stable for all platforms. See the [Statuses in the Wiki](//ungoogled-software.github.io/ungoogled-chromium-wiki/statuses) to determine the tag to use.
## Frequently-asked questions
[See FAQ.md](FAQ.md)
[See the FAQ on the Wiki](//ungoogled-software.github.io/ungoogled-chromium-wiki/faq)
## Design and implementation
[See DESIGN.md](DESIGN.md)
[See docs/design.md](docs/design.md)
## Building
[See BUILDING.md](BUILDING.md)
[See docs/building.md](docs/building.md)
## Contributing, Reporting, Contacting
@@ -133,7 +137,7 @@ There is also a [Gitter chat room](https://gitter.im/ungoogled-software/Lobby) f
* [Inox patchset](//github.com/gcarq/inox-patchset)
* [Debian](//tracker.debian.org/pkg/chromium-browser)
* [Iridium Browser](//iridiumbrowser.de/)
* The users for testing and debugging, [contributing code](https://github.com/Eloston/ungoogled-chromium/graphs/contributors), providing feedback, or simply using ungoogled-chromium in some capacity.
* The users for testing and debugging, [contributing code](//github.com/Eloston/ungoogled-chromium/graphs/contributors), providing feedback, or simply using ungoogled-chromium in some capacity.
## License

@@ -1,21 +0,0 @@
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
# Copyright (c) 2017 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Simple buildkit CLI launcher allowing invocation from anywhere.
Pass in -h or --help for usage information.
"""
import sys
import pathlib
sys.path.insert(0, str(pathlib.Path(__file__).resolve().parent))
import buildkit.cli
sys.path.pop(0)
buildkit.cli.main()

@@ -4,7 +4,6 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
CLI entry point when invoking the module directly

@@ -4,60 +4,31 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
buildkit: A small helper utility for building ungoogled-chromium.
This is the CLI interface. Available commands each have their own help; pass in
-h or --help after a command.
buildkit has optional environment variables. They are as follows:
* BUILDKIT_RESOURCES - Path to the resources/ directory. Defaults to
the one in buildkit's parent directory.
* BUILDKIT_USER_BUNDLE - Path to the user config bundle. Without it, commands
that need a bundle default to buildspace/user_bundle. This value can be
overridden per-command with the --user-bundle option.
"""
import argparse
import os
from pathlib import Path
from . import config
from . import source_retrieval
from . import downloads
from . import domain_substitution
from .common import (
CONFIG_BUNDLES_DIR, BUILDSPACE_DOWNLOADS, BUILDSPACE_TREE,
BUILDSPACE_TREE_PACKAGING, BUILDSPACE_USER_BUNDLE, SEVENZIP_USE_REGISTRY,
BuildkitAbort, ExtractorEnum, get_resources_dir, get_logger)
from . import patches
from .common import SEVENZIP_USE_REGISTRY, BuildkitAbort, ExtractorEnum, get_logger
from .config import ConfigBundle
from .extraction import prune_dir
# Classes
class _CLIError(RuntimeError):
"""Custom exception for printing argument parser errors from callbacks"""
def get_basebundle_verbosely(base_name):
"""
Returns a ConfigBundle from the given base name, otherwise it logs errors and raises
BuildkitAbort"""
try:
return ConfigBundle.from_base_name(base_name)
except NotADirectoryError as exc:
get_logger().error('resources/ or resources/patches directories could not be found.')
raise BuildkitAbort()
except FileNotFoundError:
get_logger().error('The base config bundle "%s" does not exist.', base_name)
raise BuildkitAbort()
except ValueError as exc:
get_logger().error('Base bundle metadata has an issue: %s', exc)
raise BuildkitAbort()
except BaseException:
get_logger().exception('Unexpected exception caught.')
raise BuildkitAbort()
class NewBaseBundleAction(argparse.Action): #pylint: disable=too-few-public-methods
class NewBundleAction(argparse.Action): #pylint: disable=too-few-public-methods
"""argparse.ArgumentParser action handler with more verbose logging"""
def __init__(self, *args, **kwargs):
@@ -70,394 +41,253 @@ class NewBaseBundleAction(argparse.Action): #pylint: disable=too-few-public-meth
def __call__(self, parser, namespace, values, option_string=None):
try:
base_bundle = get_basebundle_verbosely(values)
except BuildkitAbort:
bundle = ConfigBundle(values)
except BaseException:
get_logger().exception('Error loading config bundle')
parser.exit(status=1)
setattr(namespace, self.dest, base_bundle)
setattr(namespace, self.dest, bundle)
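The load-at-parse-time pattern used by NewBundleAction can be sketched in isolation. `PathLoadAction` and the bundle path below are invented for the example; the real action wraps `ConfigBundle` construction instead of `Path`.

```python
import argparse
from pathlib import Path

class PathLoadAction(argparse.Action):
    """Convert the raw argument at parse time, exiting cleanly on failure,
    mirroring how NewBundleAction wraps config bundle loading."""
    def __call__(self, parser, namespace, values, option_string=None):
        try:
            loaded = Path(values)
        except TypeError:
            parser.exit(status=1, message='Error loading bundle path\n')
        setattr(namespace, self.dest, loaded)

parser = argparse.ArgumentParser()
parser.add_argument('-b', '--bundle', action=PathLoadAction, required=True)
args = parser.parse_args(['-b', 'config_bundles/example_bundle'])
print(args.bundle.name)
```

Doing the conversion inside the action means every subcommand that reuses it gets the same verbose error handling for free.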
# Methods
def _default_user_bundle_path():
"""Returns the default path to the buildspace user bundle."""
return os.getenv('BUILDKIT_USER_BUNDLE', default=BUILDSPACE_USER_BUNDLE)
def setup_bundle_group(parser):
"""Helper to add arguments for loading a config bundle to argparse.ArgumentParser"""
config_group = parser.add_mutually_exclusive_group()
config_group.add_argument(
'-b', '--base-bundle', metavar='NAME', dest='bundle', default=argparse.SUPPRESS,
action=NewBaseBundleAction,
help=('The base config bundle name to use (located in resources/config_bundles). '
'Mutually exclusive with --user-bundle. '
'Default value is nothing; a user bundle is used by default'))
config_group.add_argument(
'-u', '--user-bundle', metavar='PATH', dest='bundle',
default=_default_user_bundle_path(),
type=lambda x: ConfigBundle(Path(x)),
help=('The path to a user bundle to use. '
'Mutually exclusive with --base-bundle. Use BUILDKIT_USER_BUNDLE '
'to override the default value. Current default: %(default)s'))
def setup_bundle_arg(parser):
"""Helper to add an argparse.ArgumentParser argument for a config bundle"""
parser.add_argument(
'-b',
'--bundle',
metavar='PATH',
dest='bundle',
required=True,
action=NewBundleAction,
help='Path to the bundle. Dependencies must reside next to the bundle.')
def _add_bunnfo(subparsers):
"""Gets info about base bundles."""
def _callback(args):
if vars(args).get('list'):
for bundle_dir in sorted(
(get_resources_dir() / CONFIG_BUNDLES_DIR).iterdir()):
bundle_meta = config.BaseBundleMetaIni(
bundle_dir / config.BASEBUNDLEMETA_INI)
print(bundle_dir.name, '-', bundle_meta.display_name)
elif vars(args).get('bundle'):
for dependency in args.bundle.get_dependencies():
print(dependency)
else:
raise NotImplementedError()
parser = subparsers.add_parser(
'bunnfo', formatter_class=argparse.ArgumentDefaultsHelpFormatter,
help=_add_bunnfo.__doc__, description=_add_bunnfo.__doc__)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
'-l', '--list', action='store_true',
help='Lists all base bundles and their display names.')
group.add_argument(
'-d', '--dependencies', dest='bundle',
action=NewBaseBundleAction,
help=('Prints the dependency order of the given base bundle, '
'delimited by newline characters. '
'See DESIGN.md for the definition of dependency order.'))
parser.set_defaults(callback=_callback)
def _add_genbun(subparsers):
"""Generates a user config bundle from a base config bundle."""
def _callback(args):
def _add_downloads(subparsers):
"""Retrieve, check, and unpack downloads"""
def _add_common_args(parser):
setup_bundle_arg(parser)
parser.add_argument(
'-c',
'--cache',
type=Path,
required=True,
help='Path to the directory to cache downloads.')
def _retrieve_callback(args):
downloads.retrieve_downloads(args.bundle, args.cache, args.show_progress,
args.disable_ssl_verification)
try:
args.base_bundle.write(args.user_bundle_path)
except FileExistsError:
get_logger().error('User bundle dir is not empty: %s', args.user_bundle_path)
downloads.check_downloads(args.bundle, args.cache)
except downloads.HashMismatchError as exc:
get_logger().error('File checksum does not match: %s', exc)
raise _CLIError()
except ValueError as exc:
get_logger().error('Error with base bundle: %s', exc)
raise _CLIError()
parser = subparsers.add_parser(
'genbun', formatter_class=argparse.ArgumentDefaultsHelpFormatter,
help=_add_genbun.__doc__, description=_add_genbun.__doc__)
parser.add_argument(
'-u', '--user-bundle', metavar='PATH', dest='user_bundle_path',
type=Path, default=_default_user_bundle_path(),
help=('The output path for the user config bundle. '
'The path must not already exist. '))
parser.add_argument(
'base_bundle', action=NewBaseBundleAction,
help='The base config bundle name to use.')
parser.set_defaults(callback=_callback)
def _add_getsrc(subparsers):
"""Downloads, checks, and unpacks the necessary files into the buildspace tree"""
def _callback(args):
try:
extractors = {
ExtractorEnum.SEVENZIP: args.sevenz_path,
ExtractorEnum.TAR: args.tar_path,
}
source_retrieval.retrieve_and_extract(
config_bundle=args.bundle, buildspace_downloads=args.downloads,
buildspace_tree=args.tree, prune_binaries=args.prune_binaries,
show_progress=args.show_progress, extractors=extractors,
disable_ssl_verification=args.disable_ssl_verification)
except FileExistsError as exc:
get_logger().error('Directory is not empty: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error('Directory or file not found: %s', exc)
raise _CLIError()
except NotADirectoryError as exc:
get_logger().error('Path is not a directory: %s', exc)
raise _CLIError()
except source_retrieval.NotAFileError as exc:
get_logger().error('Archive path is not a regular file: %s', exc)
raise _CLIError()
except source_retrieval.HashMismatchError as exc:
get_logger().error('Archive checksum is invalid: %s', exc)
raise _CLIError()
def _unpack_callback(args):
extractors = {
ExtractorEnum.SEVENZIP: args.sevenz_path,
ExtractorEnum.TAR: args.tar_path,
}
downloads.unpack_downloads(args.bundle, args.cache, args.output, extractors)
# downloads
parser = subparsers.add_parser(
'getsrc', help=_add_getsrc.__doc__ + '.',
description=_add_getsrc.__doc__ + '; ' + (
'these are the Chromium source code and any extra dependencies. '
'By default, binary pruning is performed during extraction. '
'The %s directory must already exist for storing downloads. '
'If the buildspace tree already exists or there is a checksum mismatch, '
'this command will abort. '
'Only files that are missing will be downloaded. '
'If the files are already downloaded, their checksums are '
'confirmed and then they are unpacked.') % BUILDSPACE_DOWNLOADS)
setup_bundle_group(parser)
parser.add_argument(
'-t', '--tree', type=Path, default=BUILDSPACE_TREE,
help='The buildspace tree path. Default: %(default)s')
parser.add_argument(
'-d', '--downloads', type=Path, default=BUILDSPACE_DOWNLOADS,
help=('Path to store archives of Chromium source code and extra deps. '
'Default: %(default)s'))
parser.add_argument(
'--disable-binary-pruning', action='store_false', dest='prune_binaries',
help='Disables binary pruning during extraction.')
parser.add_argument(
'--hide-progress-bar', action='store_false', dest='show_progress',
'downloads', help=_add_downloads.__doc__ + '.', description=_add_downloads.__doc__)
subsubparsers = parser.add_subparsers(title='Download actions', dest='action')
subsubparsers.required = True # Workaround for http://bugs.python.org/issue9253#msg186387
# downloads retrieve
retrieve_parser = subsubparsers.add_parser(
'retrieve',
help='Retrieve and check download files',
description='Retrieves and checks downloads without unpacking.')
_add_common_args(retrieve_parser)
retrieve_parser.add_argument(
'--hide-progress-bar',
action='store_false',
dest='show_progress',
help='Hide the download progress.')
parser.add_argument(
'--tar-path', default='tar',
retrieve_parser.add_argument(
'--disable-ssl-verification',
action='store_true',
help='Disables certificate verification for downloads using HTTPS.')
retrieve_parser.set_defaults(callback=_retrieve_callback)
# downloads unpack
unpack_parser = subsubparsers.add_parser(
'unpack',
help='Unpack download files',
description='Verifies hashes of and unpacks download files into the specified directory.')
_add_common_args(unpack_parser)
unpack_parser.add_argument(
'--tar-path',
default='tar',
help=('(Linux and macOS only) Command or path to the BSD or GNU tar '
'binary for extraction. Default: %(default)s'))
parser.add_argument(
'--7z-path', dest='sevenz_path', default=SEVENZIP_USE_REGISTRY,
unpack_parser.add_argument(
'--7z-path',
dest='sevenz_path',
default=SEVENZIP_USE_REGISTRY,
help=('Command or path to 7-Zip\'s "7z" binary. If "_use_registry" is '
'specified, determine the path from the registry. Default: %(default)s'))
parser.add_argument(
'--disable-ssl-verification', action='store_true',
help='Disables certificate verification for downloads using HTTPS.')
unpack_parser.add_argument('output', type=Path, help='The directory to unpack to.')
unpack_parser.set_defaults(callback=_unpack_callback)
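The checksum check behind the `downloads` command's HashMismatchError can be sketched with stdlib `hashlib`; the exception class and helper below are illustrative stand-ins for the real implementation in the downloads module.

```python
import hashlib

class HashMismatchError(Exception):
    """Raised when a file's checksum does not match the expected value."""

def check_hash(data: bytes, expected_sha256: str) -> None:
    """Compare the SHA-256 digest of data against an expected hex string."""
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise HashMismatchError('got %s, expected %s' % (actual, expected_sha256))

payload = b'example archive bytes'
check_hash(payload, hashlib.sha256(payload).hexdigest())  # passes silently
print('checksum ok')
```

Verifying checksums before unpacking is what lets `retrieve` and `unpack` be separate steps: already-downloaded files are re-verified rather than re-fetched.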
def _add_prune(subparsers):
"""Prunes binaries in the given path."""
def _callback(args):
if not args.directory.exists():
get_logger().error('Specified directory does not exist: %s', args.directory)
raise _CLIError()
unremovable_files = prune_dir(args.directory, args.bundle.pruning)
if unremovable_files:
get_logger().error('Files could not be pruned: %s', unremovable_files)
raise _CLIError()
parser = subparsers.add_parser('prune', help=_add_prune.__doc__, description=_add_prune.__doc__)
setup_bundle_arg(parser)
parser.add_argument('directory', type=Path, help='The directory to apply binary pruning.')
parser.set_defaults(callback=_callback)
def _add_prubin(subparsers):
"""Prunes binaries from the buildspace tree."""
def _callback(args):
logger = get_logger()
try:
resolved_tree = args.tree.resolve()
except FileNotFoundError as exc:
logger.error('File or directory does not exist: %s', exc)
raise _CLIError()
missing_file = False
for tree_node in args.bundle.pruning:
try:
(resolved_tree / tree_node).unlink()
except FileNotFoundError:
missing_file = True
logger.warning('No such file: %s', resolved_tree / tree_node)
if missing_file:
raise _CLIError()
parser = subparsers.add_parser(
'prubin', help=_add_prubin.__doc__, description=_add_prubin.__doc__ + (
' This is NOT necessary if the source code was already pruned '
'during the getsrc command.'))
setup_bundle_group(parser)
parser.add_argument(
'-t', '--tree', type=Path, default=BUILDSPACE_TREE,
help='The buildspace tree path to apply binary pruning. Default: %(default)s')
parser.set_defaults(callback=_callback)
def _add_subdom(subparsers):
"""Substitutes domain names in buildspace tree or patches with blockable strings."""
def _add_domains(subparsers):
"""Operations with domain substitution"""
def _callback(args):
try:
if not args.only or args.only == 'tree':
domain_substitution.process_tree_with_bundle(args.bundle, args.tree)
if not args.only or args.only == 'patches':
domain_substitution.process_bundle_patches(args.bundle)
if args.reverting:
domain_substitution.revert_substitution(args.cache, args.directory)
else:
domain_substitution.apply_substitution(args.bundle, args.directory, args.cache)
except FileExistsError as exc:
get_logger().error('File or directory already exists: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error('File or directory does not exist: %s', exc)
raise _CLIError()
except NotADirectoryError as exc:
get_logger().error('Patches directory does not exist: %s', exc)
raise _CLIError()
except KeyError as exc:
get_logger().error('%s', exc)
raise _CLIError()
# domains
parser = subparsers.add_parser(
'subdom', help=_add_subdom.__doc__, description=_add_subdom.__doc__ + (
' By default, it will substitute the domains on both the buildspace tree and '
'the bundle\'s patches.'))
setup_bundle_group(parser)
parser.add_argument(
'-o', '--only', choices=['tree', 'patches'],
help=('Specifies a component to exclusively apply domain substitution to. '
'"tree" is for the buildspace tree, and "patches" is for the bundle\'s patches.'))
parser.add_argument(
'-t', '--tree', type=Path, default=BUILDSPACE_TREE,
help=('The buildspace tree path to apply domain substitution. '
'Not applicable when --only is "patches". Default: %(default)s'))
'domains', help=_add_domains.__doc__, description=_add_domains.__doc__)
parser.set_defaults(callback=_callback)
def _add_genpkg_archlinux(subparsers):
"""Generates a PKGBUILD for Arch Linux"""
def _callback(args):
from .packaging import archlinux as packaging_archlinux
try:
packaging_archlinux.generate_packaging(
args.bundle, args.output, repo_version=args.repo_commit,
repo_hash=args.repo_hash)
except FileExistsError as exc:
get_logger().error('PKGBUILD already exists: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error(
'Output path is not an existing directory: %s', exc)
raise _CLIError()
parser = subparsers.add_parser(
'archlinux', help=_add_genpkg_archlinux.__doc__,
description=_add_genpkg_archlinux.__doc__)
parser.add_argument(
'-o', '--output', type=Path, default='buildspace',
help=('The directory to store packaging files. '
'It must exist and not already contain a PKGBUILD file. '
'Default: %(default)s'))
parser.add_argument(
'--repo-commit', action='store_const', const='git', default='bundle',
help=("Use the current git repo's commit hash to specify the "
"ungoogled-chromium repo to download instead of a tag determined "
"by the config bundle's version config file. Requires git to be "
"in PATH and buildkit to be invoked inside of a clone of "
"ungoogled-chromium's git repository."))
parser.add_argument(
'--repo-hash', default='SKIP',
help=('The SHA-256 hash to verify the archive of the ungoogled-chromium '
'repository to download within the PKGBUILD. If it is "compute", '
'the hash is computed by downloading the archive to memory and '
'computing the hash. If it is "SKIP", hash computation is skipped. '
'Default: %(default)s'))
parser.set_defaults(callback=_callback)
def _add_genpkg_debian(subparsers):
"""Generate Debian packaging files"""
def _callback(args):
from .packaging import debian as packaging_debian
try:
packaging_debian.generate_packaging(args.bundle, args.flavor, args.output)
except FileExistsError as exc:
get_logger().error('debian directory is not empty: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error(
'Parent directories do not exist for path: %s', exc)
raise _CLIError()
parser = subparsers.add_parser(
'debian', help=_add_genpkg_debian.__doc__, description=_add_genpkg_debian.__doc__)
parser.add_argument(
'-f', '--flavor', required=True, help='The Debian packaging flavor to use.')
parser.add_argument(
'-o', '--output', type=Path, default='%s/debian' % BUILDSPACE_TREE,
help=('The path to the debian directory to be created. '
'It must not already exist, but the parent directories must exist. '
'Default: %(default)s'))
parser.set_defaults(callback=_callback)
def _add_genpkg_linux_simple(subparsers):
"""Generate Linux Simple packaging files"""
def _callback(args):
from .packaging import linux_simple as packaging_linux_simple
try:
packaging_linux_simple.generate_packaging(args.bundle, args.output)
except FileExistsError as exc:
get_logger().error('Output directory is not empty: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error(
'Parent directories do not exist for path: %s', exc)
raise _CLIError()
parser = subparsers.add_parser(
'linux_simple', help=_add_genpkg_linux_simple.__doc__,
description=_add_genpkg_linux_simple.__doc__)
parser.add_argument(
'-o', '--output', type=Path, default=BUILDSPACE_TREE_PACKAGING,
help=('The directory to store packaging files. '
'It must not already exist, but the parent directories must exist. '
'Default: %(default)s'))
parser.set_defaults(callback=_callback)
def _add_genpkg_opensuse(subparsers):
"""Generate OpenSUSE packaging files"""
def _callback(args):
from .packaging import opensuse as packaging_opensuse
try:
packaging_opensuse.generate_packaging(args.bundle, args.output)
except FileExistsError as exc:
get_logger().error('Output directory is not empty: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error(
'Parent directories do not exist for path: %s', exc)
raise _CLIError()
parser = subparsers.add_parser(
'opensuse', help=_add_genpkg_opensuse.__doc__,
description=_add_genpkg_opensuse.__doc__)
parser.add_argument(
'-o', '--output', type=Path, default=BUILDSPACE_TREE_PACKAGING,
help=('The directory to store packaging files. '
'It must not already exist, but the parent directories must exist. '
'Default: %(default)s'))
parser.set_defaults(callback=_callback)
def _add_genpkg_windows(subparsers):
"""Generate Microsoft Windows packaging files"""
def _callback(args):
from .packaging import windows as packaging_windows
try:
packaging_windows.generate_packaging(args.bundle, args.output)
except FileExistsError as exc:
get_logger().error('Output directory is not empty: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error(
'Parent directories do not exist for path: %s', exc)
raise _CLIError()
parser = subparsers.add_parser(
'windows', help=_add_genpkg_windows.__doc__,
description=_add_genpkg_windows.__doc__)
parser.add_argument(
'-o', '--output', type=Path, default=BUILDSPACE_TREE_PACKAGING,
help=('The directory to store packaging files. '
'It must not already exist, but the parent directories must exist. '
'Default: %(default)s'))
parser.set_defaults(callback=_callback)
def _add_genpkg_macos(subparsers):
"""Generate macOS packaging files"""
def _callback(args):
from .packaging import macos as packaging_macos
try:
packaging_macos.generate_packaging(args.bundle, args.output)
except FileExistsError as exc:
get_logger().error('Output directory is not empty: %s', exc)
raise _CLIError()
except FileNotFoundError as exc:
get_logger().error(
'Parent directories do not exist for path: %s', exc)
raise _CLIError()
parser = subparsers.add_parser(
'macos', help=_add_genpkg_macos.__doc__, description=_add_genpkg_macos.__doc__)
parser.add_argument(
'-o', '--output', type=Path, default=BUILDSPACE_TREE_PACKAGING,
help=('The directory to store packaging files. '
'It must not already exist, but the parent directories must exist. '
'Default: %(default)s'))
parser.set_defaults(callback=_callback)
def _add_genpkg(subparsers):
"""Generates a packaging script."""
parser = subparsers.add_parser(
'genpkg', help=_add_genpkg.__doc__,
description=_add_genpkg.__doc__ + ' Specify no arguments to get a list of different types.')
setup_bundle_group(parser)
# Add subcommands to genpkg for handling different packaging types in the same manner as main()
# However, the top-level argparse.ArgumentParser will be passed the callback.
subsubparsers = parser.add_subparsers(title='Available packaging types', dest='packaging')
subsubparsers = parser.add_subparsers(title='', dest='packaging')
subsubparsers.required = True # Workaround for http://bugs.python.org/issue9253#msg186387
_add_genpkg_archlinux(subsubparsers)
_add_genpkg_debian(subsubparsers)
_add_genpkg_linux_simple(subsubparsers)
_add_genpkg_opensuse(subsubparsers)
_add_genpkg_windows(subsubparsers)
_add_genpkg_macos(subsubparsers)
# domains apply
apply_parser = subsubparsers.add_parser(
'apply',
help='Apply domain substitution',
description='Applies domain substitution and creates the domain substitution cache.')
setup_bundle_arg(apply_parser)
apply_parser.add_argument(
'-c',
'--cache',
type=Path,
required=True,
help='The path to the domain substitution cache. The path must not already exist.')
apply_parser.add_argument(
'directory', type=Path, help='The directory to apply domain substitution')
apply_parser.set_defaults(reverting=False)
# domains revert
revert_parser = subsubparsers.add_parser(
'revert',
help='Revert domain substitution',
description='Reverts domain substitution based only on the domain substitution cache.')
revert_parser.add_argument(
'directory', type=Path, help='The directory to reverse domain substitution')
revert_parser.add_argument(
'-c',
'--cache',
type=Path,
required=True,
help=('The path to the domain substitution cache. '
'The path must exist and will be removed if successful.'))
revert_parser.set_defaults(reverting=True)
def _add_patches(subparsers):
"""Operations with patches"""
def _export_callback(args):
patches.export_patches(args.bundle, args.output)
def _apply_callback(args):
patches.apply_patches(
patches.patch_paths_by_bundle(args.bundle),
args.directory,
patch_bin_path=args.patch_bin)
# patches
parser = subparsers.add_parser(
'patches', help=_add_patches.__doc__, description=_add_patches.__doc__)
subsubparsers = parser.add_subparsers(title='Patches actions')
subsubparsers.required = True
# patches export
export_parser = subsubparsers.add_parser(
'export',
help='Export patches in GNU quilt-compatible format',
description='Export a config bundle\'s patches to a quilt-compatible format')
setup_bundle_arg(export_parser)
export_parser.add_argument(
'output',
type=Path,
help='The directory to write to. It must either be empty or not exist.')
export_parser.set_defaults(callback=_export_callback)
# patches apply
apply_parser = subsubparsers.add_parser(
'apply', help='Applies a config bundle\'s patches to the specified source tree')
setup_bundle_arg(apply_parser)
apply_parser.add_argument(
'--patch-bin', help='The GNU patch command to use. Omit to find it automatically.')
apply_parser.add_argument('directory', type=Path, help='The source tree to apply patches.')
apply_parser.set_defaults(callback=_apply_callback)
def _add_gnargs(subparsers):
"""Operations with GN arguments"""
def _print_callback(args):
print(str(args.bundle.gn_flags), end='')
# gnargs
parser = subparsers.add_parser(
'gnargs', help=_add_gnargs.__doc__, description=_add_gnargs.__doc__)
subsubparsers = parser.add_subparsers(title='GN args actions')
# gnargs print
print_parser = subsubparsers.add_parser(
'print',
help='Prints GN args in args.gn format',
description='Prints a list of GN args in args.gn format to standard output')
setup_bundle_arg(print_parser)
print_parser.set_defaults(callback=_print_callback)
def main(arg_list=None):
"""CLI entry point"""
parser = argparse.ArgumentParser(description=__doc__,
formatter_class=argparse.RawTextHelpFormatter)
parser = argparse.ArgumentParser(
description=__doc__, formatter_class=argparse.RawTextHelpFormatter)
subparsers = parser.add_subparsers(title='Available commands', dest='command')
subparsers.required = True # Workaround for http://bugs.python.org/issue9253#msg186387
_add_downloads(subparsers)
_add_prune(subparsers)
_add_domains(subparsers)
_add_patches(subparsers)
_add_gnargs(subparsers)
args = parser.parse_args(args=arg_list)
try:


@@ -3,37 +3,52 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Common code and constants"""
import configparser
import enum
import os
import logging
import platform
from pathlib import Path
from .third_party import schema
# Constants
ENCODING = 'UTF-8' # For config files and patches
CONFIG_BUNDLES_DIR = "config_bundles"
PACKAGING_DIR = "packaging"
PATCHES_DIR = "patches"
BUILDSPACE_DOWNLOADS = 'buildspace/downloads'
BUILDSPACE_TREE = 'buildspace/tree'
BUILDSPACE_TREE_PACKAGING = 'buildspace/tree/ungoogled_packaging'
BUILDSPACE_USER_BUNDLE = 'buildspace/user_bundle'
SEVENZIP_USE_REGISTRY = '_use_registry'
_ENV_FORMAT = "BUILDKIT_{}"
_VERSION_INI_PATH = Path(__file__).parent.parent / 'version.ini'
_VERSION_SCHEMA = schema.Schema({
'version': {
'chromium_version': schema.And(str, len),
'release_revision': schema.And(str, len),
}
})
# Helpers for third_party.schema
def schema_dictcast(data):
"""Cast data to dictionary for third_party.schema and configparser data structures"""
return schema.And(schema.Use(dict), data)
def schema_inisections(data):
"""Cast configparser data structure to dict and remove DEFAULT section"""
return schema_dictcast({configparser.DEFAULTSECT: object, **data})
# Public classes
class BuildkitError(Exception):
"""Represents a generic custom error from buildkit"""
class BuildkitAbort(BuildkitError):
"""
Exception thrown when all details have been logged and buildkit aborts.
@@ -41,20 +56,24 @@ class BuildkitAbort(BuildkitError):
It should only be caught by the user of buildkit's library interface.
"""
class PlatformEnum(enum.Enum):
"""Enum for platforms that need distinction for certain functionality"""
UNIX = 'unix' # Currently covers anything that isn't Windows
WINDOWS = 'windows'
class ExtractorEnum: #pylint: disable=too-few-public-methods
"""Enum for extraction binaries"""
SEVENZIP = '7z'
TAR = 'tar'
# Public methods
def get_logger(name=__package__, initial_level=logging.DEBUG, prepend_timestamp=True,
               log_init=True):
'''Gets the named logger'''
logger = logging.getLogger(name)
@@ -80,23 +99,6 @@ def get_logger(name=__package__, initial_level=logging.DEBUG,
logger.debug("Initialized logger '%s'", name)
return logger
def dir_empty(path):
"""
@@ -110,6 +112,7 @@ def dir_empty(path):
return True
return False
def ensure_empty_dir(path, parents=False):
"""
Makes a directory at path if it doesn't exist. If it exists, check if it is empty.
@@ -125,6 +128,7 @@ def ensure_empty_dir(path, parents=False):
if not dir_empty(path):
raise exc
def get_running_platform():
"""
Returns a PlatformEnum value indicating the platform that buildkit is running on.
@@ -137,3 +141,41 @@ def get_running_platform():
return PlatformEnum.WINDOWS
# Only Windows and UNIX-based platforms need to be distinguished right now.
return PlatformEnum.UNIX
def _ini_section_generator(ini_parser):
"""
Yields tuples of a section name and its corresponding dictionary of keys and values
"""
for section in ini_parser:
if section == configparser.DEFAULTSECT:
continue
yield section, dict(ini_parser.items(section))
def validate_and_get_ini(ini_path, ini_schema):
"""
Validates and returns the parsed INI
"""
parser = configparser.ConfigParser()
with ini_path.open(encoding=ENCODING) as ini_file: #pylint: disable=no-member
parser.read_file(ini_file, source=str(ini_path))
try:
ini_schema.validate(dict(_ini_section_generator(parser)))
except schema.SchemaError as exc:
get_logger().error('%s failed schema validation at: %s', ini_path.name, ini_path)
raise exc
return parser
def get_chromium_version():
"""Returns the Chromium version."""
return _VERSION_INI['version']['chromium_version']
def get_release_revision():
"""Returns the Chromium version."""
return _VERSION_INI['version']['release_revision']
_VERSION_INI = validate_and_get_ini(_VERSION_INI_PATH, _VERSION_SCHEMA)
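
The `validate_and_get_ini` pattern above (parse the INI, then fail loudly if it does not match the expected schema) can be sketched with the standard library alone. The checks below stand in for the vendored `third_party.schema` validation, and `validate_ini_text` plus its example values are hypothetical:

```python
import configparser

# Required sections and keys, mirroring _VERSION_SCHEMA
_REQUIRED = {'version': ('chromium_version', 'release_revision')}

def validate_ini_text(text):
    """Parse INI text and require non-empty keys, mimicking _VERSION_SCHEMA."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    for section, keys in _REQUIRED.items():
        if not parser.has_section(section):
            raise ValueError('Missing section: %s' % section)
        for key in keys:
            if not parser.get(section, key, fallback=''):
                raise ValueError('Missing or empty key: %s.%s' % (section, key))
    return parser

ini = validate_ini_text('[version]\nchromium_version = 68.0.3440.75\nrelease_revision = 2\n')
print(ini['version']['chromium_version'])  # 68.0.3440.75
```

The real module raises `schema.SchemaError` and logs through `get_logger` instead of raising `ValueError` directly.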

File diff suppressed because it is too large


@@ -3,118 +3,226 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Module for substituting domain names in the source tree with blockable strings.
"""
import io
import re
import tarfile
import tempfile
import zlib
from pathlib import Path
from .extraction import extract_tar_file
from .common import ENCODING, get_logger
# Encodings to try on source tree files
TREE_ENCODINGS = (ENCODING, 'ISO-8859-1')
# Constants for domain substitution cache
_INDEX_LIST = 'cache_index.list'
_INDEX_HASH_DELIMITER = '|'
_ORIG_DIR = 'orig'
# Private Methods
def _substitute_path(path, regex_iter):
    """
Perform domain substitution on path and add it to the domain substitution cache.
path is a pathlib.Path to the file to be domain substituted.
regex_iter is an iterable of regular expression namedtuple like from
config.DomainRegexList.get_pairs()
Returns a tuple of the CRC32 hash of the substituted raw content and the
original raw content; None for both entries if no substitutions were made.
Raises FileNotFoundError if path does not exist.
Raises UnicodeDecodeError if path's contents cannot be decoded.
"""
with path.open('r+b') as input_file:
original_content = input_file.read()
if not original_content:
return (None, None)
content = None
encoding = None
for encoding in TREE_ENCODINGS:
try:
content = original_content.decode(encoding)
break
except UnicodeDecodeError:
continue
if not content:
raise UnicodeDecodeError('Unable to decode with any encoding: %s' % path)
file_subs = 0
for regex_pair in regex_iter:
content, sub_count = regex_pair.pattern.subn(regex_pair.replacement, content)
file_subs += sub_count
if file_subs > 0:
substituted_content = content.encode(encoding)
input_file.seek(0)
input_file.write(content.encode(encoding))
input_file.truncate()
return (zlib.crc32(substituted_content), original_content)
return (None, None)
def _validate_file_index(index_file, resolved_tree, cache_index_files):
    """
    Validation of file index and hashes against the source tree.
    Updates cache_index_files
    Returns True if the file index is valid; False otherwise
    """
all_hashes_valid = True
    crc32_regex = re.compile(r'^[0-9a-fA-F]{8}$')
for entry in index_file.read().decode(ENCODING).splitlines():
try:
relative_path, file_hash = entry.split(_INDEX_HASH_DELIMITER)
except ValueError as exc:
get_logger().error('Could not split entry "%s": %s', entry, exc)
continue
if not relative_path or not file_hash:
get_logger().error('Entry %s of domain substitution cache file index is not valid',
_INDEX_HASH_DELIMITER.join((relative_path, file_hash)))
all_hashes_valid = False
continue
if not crc32_regex.match(file_hash):
get_logger().error('File index hash for %s does not appear to be a CRC32 hash',
relative_path)
all_hashes_valid = False
continue
if zlib.crc32((resolved_tree / relative_path).read_bytes()) != int(file_hash, 16):
get_logger().error('Hashes do not match for: %s', relative_path)
all_hashes_valid = False
continue
if relative_path in cache_index_files:
get_logger().error('File %s shows up at least twice in the file index', relative_path)
all_hashes_valid = False
continue
cache_index_files.add(relative_path)
return all_hashes_valid
# Public Methods
def apply_substitution(config_bundle, source_tree, domainsub_cache):
"""
    Substitute domains in source_tree with files and substitutions from config_bundle,
    and save the pre-substitution file contents to domainsub_cache.
    config_bundle is a config.ConfigBundle
    source_tree is a pathlib.Path to the source tree.
    domainsub_cache is a pathlib.Path to the domain substitution cache.
    Raises FileNotFoundError if the source tree or required directory does not exist.
    Raises FileExistsError if the domain substitution cache already exists.
    Raises ValueError if an entry in the domain substitution list contains the file index
    hash delimiter.
"""
if not source_tree.exists():
raise FileNotFoundError(source_tree)
if domainsub_cache.exists():
raise FileExistsError(domainsub_cache)
resolved_tree = source_tree.resolve()
regex_pairs = config_bundle.domain_regex.get_pairs()
fileindex_content = io.BytesIO()
with tarfile.open(
str(domainsub_cache), 'w:%s' % domainsub_cache.suffix[1:],
compresslevel=1) as cache_tar:
orig_dir = Path(_ORIG_DIR)
for relative_path in config_bundle.domain_substitution:
if _INDEX_HASH_DELIMITER in relative_path:
# Cache tar will be incomplete; remove it for convenience
cache_tar.close()
domainsub_cache.unlink()
                raise ValueError(
                    'Path "%s" contains the file index hash delimiter "%s"' %
                    (relative_path, _INDEX_HASH_DELIMITER))
path = resolved_tree / relative_path
            if not path.exists():
                get_logger().warning('Skipping non-existent path: %s', path)
                continue
crc32_hash, orig_content = _substitute_path(path, regex_pairs)
if crc32_hash is None:
get_logger().info('Path has no substitutions: %s', relative_path)
continue
fileindex_content.write('{}{}{:08x}\n'.format(relative_path, _INDEX_HASH_DELIMITER,
crc32_hash).encode(ENCODING))
orig_tarinfo = tarfile.TarInfo(str(orig_dir / relative_path))
orig_tarinfo.size = len(orig_content)
with io.BytesIO(orig_content) as orig_file:
cache_tar.addfile(orig_tarinfo, orig_file)
fileindex_tarinfo = tarfile.TarInfo(_INDEX_LIST)
fileindex_tarinfo.size = fileindex_content.tell()
fileindex_content.seek(0)
cache_tar.addfile(fileindex_tarinfo, fileindex_content)
def revert_substitution(domainsub_cache, source_tree):
"""
Revert domain substitution on source_tree using the pre-domain
substitution archive presubdom_archive.
It first checks if the hashes of the substituted files match the hashes
computed during the creation of the domain substitution cache, raising
KeyError if there are any mismatches. Then, it proceeds to
reverting files in the source_tree.
domainsub_cache is removed only if all the files from the domain substitution cache
were relocated to the source tree.
domainsub_cache is a pathlib.Path to the domain substitution cache.
source_tree is a pathlib.Path to the source tree.
Raises KeyError if:
* There is a hash mismatch while validating the cache
* The cache's file index is corrupt or missing
* The cache is corrupt or is not consistent with the file index
Raises FileNotFoundError if the source tree or domain substitution cache do not exist.
"""
# This implementation trades disk space/wear for performance (unless a ramdisk is used
# for the source tree)
# Assumptions made for this process:
# * The correct tar file was provided (so no huge amount of space is wasted)
# * The tar file is well-behaved (e.g. no files extracted outside of destination path)
# * Cache file index and cache contents are already consistent (i.e. no files exclusive to
# one or the other)
if not domainsub_cache.exists():
raise FileNotFoundError(domainsub_cache)
if not source_tree.exists():
raise FileNotFoundError(source_tree)
resolved_tree = source_tree.resolve()
cache_index_files = set() # All files in the file index
with tempfile.TemporaryDirectory(
prefix='domsubcache_files', dir=str(resolved_tree)) as tmp_extract_name:
extract_path = Path(tmp_extract_name)
get_logger().debug('Extracting domain substitution cache...')
extract_tar_file(domainsub_cache, extract_path, Path())
# Validate source tree file hashes match
get_logger().debug('Validating substituted files in source tree...')
with (extract_path / _INDEX_LIST).open('rb') as index_file: #pylint: disable=no-member
if not _validate_file_index(index_file, resolved_tree, cache_index_files):
raise KeyError('Domain substitution cache file index is corrupt or hashes mismatch '
'the source tree.')
# Move original files over substituted ones
get_logger().debug('Moving original files over substituted ones...')
for relative_path in cache_index_files:
(extract_path / _ORIG_DIR / relative_path).replace(resolved_tree / relative_path)
# Quick check for unused files in cache
orig_has_unused = False
for orig_path in (extract_path / _ORIG_DIR).rglob('*'): #pylint: disable=no-member
if orig_path.is_file():
get_logger().warning('Unused file from cache: %s', orig_path)
orig_has_unused = True
if orig_has_unused:
get_logger().warning('Cache contains unused files. Not removing.')
else:
domainsub_cache.unlink()
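
The cache index entries written and checked above are plain `path|crc32` lines. This standalone sketch (the helper names are hypothetical) mirrors the formatting in `apply_substitution` and the comparison in `_validate_file_index`:

```python
import zlib

_INDEX_HASH_DELIMITER = '|'

def make_index_entry(relative_path, substituted_bytes):
    """Format one cache_index.list line: path, delimiter, zero-padded CRC32 hex."""
    return '{}{}{:08x}'.format(relative_path, _INDEX_HASH_DELIMITER,
                               zlib.crc32(substituted_bytes))

def entry_matches(entry, current_bytes):
    """Split an index entry and compare the recorded CRC32 against the current bytes."""
    relative_path, file_hash = entry.split(_INDEX_HASH_DELIMITER)
    return zlib.crc32(current_bytes) == int(file_hash, 16)

entry = make_index_entry('net/base/example.cc', b'substituted contents')
print(entry_matches(entry, b'substituted contents'))  # True
```

This is why `apply_substitution` refuses any substitution-list path containing `|`: it would make the index line ambiguous to split.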

buildkit/downloads.py Normal file

@@ -0,0 +1,198 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Module for the downloading, checking, and unpacking of necessary files into the source tree
"""
import enum
import urllib.request
import hashlib
from pathlib import Path
from .common import ENCODING, BuildkitError, ExtractorEnum, get_logger
from .extraction import extract_tar_file, extract_with_7z
# Constants
class HashesURLEnum(str, enum.Enum):
"""Enum for supported hash URL schemes"""
chromium = 'chromium'
# Custom Exceptions
class HashMismatchError(BuildkitError):
"""Exception for computed hashes not matching expected hashes"""
pass
class _UrlRetrieveReportHook: #pylint: disable=too-few-public-methods
"""Hook for urllib.request.urlretrieve to log progress information to console"""
def __init__(self):
self._max_len_printed = 0
self._last_percentage = None
def __call__(self, block_count, block_size, total_size):
        downloaded_estimate = block_count * block_size
        if total_size > 0:
            percentage = round(downloaded_estimate / total_size, ndigits=3)
            if percentage == self._last_percentage:
                return # Do not needlessly update the console
            self._last_percentage = percentage
            status_line = 'Progress: {:.1%} of {:,d} B'.format(percentage, total_size)
        else:
            status_line = 'Progress: {:,d} B of unknown size'.format(downloaded_estimate)
        print('\r' + ' ' * self._max_len_printed, end='')
        self._max_len_printed = len(status_line)
        print('\r' + status_line, end='')
def _download_if_needed(file_path, url, show_progress):
"""
Downloads a file from url to the specified path file_path if necessary.
If show_progress is True, download progress is printed to the console.
"""
if file_path.exists():
get_logger().info('%s already exists. Skipping download.', file_path)
else:
get_logger().info('Downloading %s ...', file_path)
reporthook = None
if show_progress:
reporthook = _UrlRetrieveReportHook()
urllib.request.urlretrieve(url, str(file_path), reporthook=reporthook)
if show_progress:
print()
def _chromium_hashes_generator(hashes_path):
    """Yields (hash name, hash hex) pairs parsed from a Chromium-style hashes file"""
    with hashes_path.open(encoding=ENCODING) as hashes_file:
hash_lines = hashes_file.read().splitlines()
for hash_name, hash_hex, _ in map(lambda x: x.lower().split(' '), hash_lines):
if hash_name in hashlib.algorithms_available:
yield hash_name, hash_hex
else:
get_logger().warning('Skipping unknown hash algorithm: %s', hash_name)
def _downloads_iter(config_bundle):
"""Iterator for the downloads ordered by output path"""
return sorted(config_bundle.downloads, key=(lambda x: str(Path(x.output_path))))
def _get_hash_pairs(download_properties, cache_dir):
"""Generator of (hash_name, hash_hex) for the given download"""
for entry_type, entry_value in download_properties.hashes.items():
if entry_type == 'hash_url':
hash_processor, hash_filename, _ = entry_value
if hash_processor == 'chromium':
yield from _chromium_hashes_generator(cache_dir / hash_filename)
else:
raise ValueError('Unknown hash_url processor: %s' % hash_processor)
else:
yield entry_type, entry_value
def retrieve_downloads(config_bundle, cache_dir, show_progress, disable_ssl_verification=False):
"""
Retrieve downloads into the downloads cache.
config_bundle is the config.ConfigBundle to retrieve downloads for.
cache_dir is the pathlib.Path to the downloads cache.
show_progress is a boolean indicating if download progress is printed to the console.
disable_ssl_verification is a boolean indicating if certificate verification
should be disabled for downloads using HTTPS.
Raises FileNotFoundError if the downloads path does not exist.
Raises NotADirectoryError if the downloads path is not a directory.
"""
if not cache_dir.exists():
raise FileNotFoundError(cache_dir)
if not cache_dir.is_dir():
raise NotADirectoryError(cache_dir)
if disable_ssl_verification:
import ssl
# TODO: Remove this or properly implement disabling SSL certificate verification
orig_https_context = ssl._create_default_https_context #pylint: disable=protected-access
ssl._create_default_https_context = ssl._create_unverified_context #pylint: disable=protected-access
try:
for download_name in _downloads_iter(config_bundle):
download_properties = config_bundle.downloads[download_name]
get_logger().info('Downloading "%s" to "%s" ...', download_name,
download_properties.download_filename)
download_path = cache_dir / download_properties.download_filename
_download_if_needed(download_path, download_properties.url, show_progress)
if download_properties.has_hash_url():
get_logger().info('Downloading hashes for "%s"', download_name)
_, hash_filename, hash_url = download_properties.hashes['hash_url']
_download_if_needed(cache_dir / hash_filename, hash_url, show_progress)
finally:
# Try to reduce damage of hack by reverting original HTTPS context ASAP
if disable_ssl_verification:
ssl._create_default_https_context = orig_https_context #pylint: disable=protected-access
def check_downloads(config_bundle, cache_dir):
"""
Check integrity of the downloads cache.
config_bundle is the config.ConfigBundle to unpack downloads for.
cache_dir is the pathlib.Path to the downloads cache.
    Raises HashMismatchError when the computed and expected hashes do not match.
"""
for download_name in _downloads_iter(config_bundle):
get_logger().info('Verifying hashes for "%s" ...', download_name)
download_properties = config_bundle.downloads[download_name]
download_path = cache_dir / download_properties.download_filename
with download_path.open('rb') as file_obj:
archive_data = file_obj.read()
for hash_name, hash_hex in _get_hash_pairs(download_properties, cache_dir):
get_logger().debug('Verifying %s hash...', hash_name)
hasher = hashlib.new(hash_name, data=archive_data)
if not hasher.hexdigest().lower() == hash_hex.lower():
raise HashMismatchError(download_path)
def unpack_downloads(config_bundle, cache_dir, output_dir, extractors=None):
"""
Unpack downloads in the downloads cache to output_dir. Assumes all downloads are retrieved.
config_bundle is the config.ConfigBundle to unpack downloads for.
cache_dir is the pathlib.Path directory containing the download cache
output_dir is the pathlib.Path directory to unpack the downloads to.
extractors is a dictionary of PlatformEnum to a command or path to the
extractor binary. Defaults to 'tar' for tar, and '_use_registry' for 7-Zip.
May raise undetermined exceptions during archive unpacking.
"""
for download_name in _downloads_iter(config_bundle):
download_properties = config_bundle.downloads[download_name]
download_path = cache_dir / download_properties.download_filename
get_logger().info('Unpacking "%s" to %s ...', download_name,
download_properties.output_path)
extractor_name = download_properties.extractor or ExtractorEnum.TAR
if extractor_name == ExtractorEnum.SEVENZIP:
extractor_func = extract_with_7z
elif extractor_name == ExtractorEnum.TAR:
extractor_func = extract_tar_file
else:
raise NotImplementedError(extractor_name)
if download_properties.strip_leading_dirs is None:
strip_leading_dirs_path = None
else:
strip_leading_dirs_path = Path(download_properties.strip_leading_dirs)
extractor_func(
archive_path=download_path,
output_dir=output_dir,
unpack_dir=Path(download_properties.output_path),
relative_to=strip_leading_dirs_path,
extractors=extractors)
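
The digest comparison in `check_downloads` reduces to a small `hashlib` pattern. `check_hash` below is a hypothetical standalone helper showing the case-insensitive comparison and the `algorithms_available` guard used above:

```python
import hashlib

def check_hash(data, hash_name, expected_hex):
    """Compute the named digest over data and compare it case-insensitively."""
    if hash_name not in hashlib.algorithms_available:
        raise ValueError('Unknown hash algorithm: %s' % hash_name)
    return hashlib.new(hash_name, data=data).hexdigest().lower() == expected_hex.lower()

data = b'example archive bytes'
print(check_hash(data, 'sha256', hashlib.sha256(data).hexdigest().upper()))  # True
```

Lower-casing both sides matters because Chromium's published hash files and user-supplied hex may differ in case from `hexdigest()` output.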


@@ -3,7 +3,6 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Archive extraction utilities
"""
@@ -14,15 +13,15 @@ import subprocess
import tarfile
from pathlib import Path, PurePosixPath
from .common import (SEVENZIP_USE_REGISTRY, BuildkitAbort, PlatformEnum, ExtractorEnum, get_logger,
                     get_running_platform)
DEFAULT_EXTRACTORS = {
ExtractorEnum.SEVENZIP: SEVENZIP_USE_REGISTRY,
ExtractorEnum.TAR: 'tar',
}
def _find_7z_by_registry():
"""
Return a string to 7-zip's 7z.exe from the Windows Registry.
@@ -42,6 +41,7 @@ def _find_7z_by_registry():
get_logger().error('7z.exe not found at path from registry: %s', sevenzip_path)
return sevenzip_path
def _find_extractor_by_cmd(extractor_cmd):
"""Returns a string path to the binary; None if it couldn't be found"""
if not extractor_cmd:
@@ -50,6 +50,7 @@ def _find_extractor_by_cmd(extractor_cmd):
return extractor_cmd
return shutil.which(extractor_cmd)
def _process_relative_to(unpack_root, relative_to):
"""
For an extractor that doesn't support an automatic transform, move the extracted
@@ -57,40 +58,41 @@ def _process_relative_to(unpack_root, relative_to):
"""
relative_root = unpack_root / relative_to
if not relative_root.is_dir():
        get_logger().error('Could not find relative_to directory in extracted files: %s',
                           relative_to)
raise BuildkitAbort()
for src_path in relative_root.iterdir():
dest_path = unpack_root / src_path.name
src_path.rename(dest_path)
relative_root.rmdir()
def prune_dir(unpack_root, ignore_files):
    """
    Delete files under unpack_root listed in ignore_files. Returns an iterable of unremovable files.
    unpack_root is a pathlib.Path to the directory to be pruned
    ignore_files is an iterable of files to be removed.
    """
    unremovable_files = set()
    for relative_file in ignore_files:
        file_path = unpack_root / relative_file
        try:
            file_path.unlink()
        except FileNotFoundError:
            unremovable_files.add(Path(relative_file).as_posix())
    return unremovable_files
def _extract_tar_with_7z(binary, archive_path, output_dir, relative_to):
get_logger().debug('Using 7-zip extractor')
    if not relative_to is None and (output_dir / relative_to).exists():
        get_logger().error('Temporary unpacking directory already exists: %s',
                           output_dir / relative_to)
raise BuildkitAbort()
cmd1 = (binary, 'x', str(archive_path), '-so')
    cmd2 = (binary, 'x', '-si', '-aoa', '-ttar', '-o{}'.format(str(output_dir)))
    get_logger().debug('7z command line: %s | %s', ' '.join(cmd1), ' '.join(cmd2))
proc1 = subprocess.Popen(cmd1, stdout=subprocess.PIPE)
proc2 = subprocess.Popen(cmd2, stdin=proc1.stdout, stdout=subprocess.PIPE)
@@ -103,16 +105,13 @@ def _extract_tar_with_7z(binary, archive_path, buildspace_tree, unpack_dir, igno
raise BuildkitAbort()
if not relative_to is None:
        _process_relative_to(output_dir, relative_to)
def _extract_tar_with_tar(binary, archive_path, output_dir, relative_to):
get_logger().debug('Using BSD or GNU tar extractor')
    output_dir.mkdir(exist_ok=True)
    cmd = (binary, '-xf', str(archive_path), '-C', str(output_dir))
get_logger().debug('tar command line: %s', ' '.join(cmd))
result = subprocess.run(cmd)
if result.returncode != 0:
@@ -122,14 +121,15 @@ def _extract_tar_with_tar(binary, archive_path, buildspace_tree, unpack_dir, #py
# for gnu tar, the --transform option could be used. but to keep compatibility with
# bsdtar on macos, we just do this ourselves
if not relative_to is None:
        _process_relative_to(output_dir, relative_to)
def _extract_tar_with_python(archive_path, output_dir, relative_to):
get_logger().debug('Using pure Python tar extractor')
class NoAppendList(list):
"""Hack to workaround memory issues with large tar files"""
def append(self, obj):
pass
@@ -148,57 +148,51 @@ def _extract_tar_with_python(archive_path, buildspace_tree, unpack_dir, ignore_f
get_logger().exception('Unexpected exception during symlink support check.')
raise BuildkitAbort()
    with tarfile.open(str(archive_path), 'r|%s' % archive_path.suffix[1:]) as tar_file_obj:
tar_file_obj.members = NoAppendList()
for tarinfo in tar_file_obj:
try:
if relative_to is None:
                    destination = output_dir / PurePosixPath(tarinfo.name)
                else:
                    destination = output_dir / PurePosixPath(tarinfo.name).relative_to(
                        relative_to)
                if tarinfo.issym() and not symlink_supported:
                    # In this situation, TarFile.makelink() will try to create a copy of the
                    # target. But this fails because TarFile.members is empty
                    # But if symlinks are not supported, it's safe to assume that symlinks
                    # aren't needed. The only situation where this happens is on Windows.
                    continue
                if tarinfo.islnk():
                    # Derived from TarFile.extract()
                    new_target = output_dir / PurePosixPath(
                        tarinfo.linkname).relative_to(relative_to)
                    tarinfo._link_target = new_target.as_posix() # pylint: disable=protected-access
                if destination.is_symlink():
                    destination.unlink()
                tar_file_obj._extract_member(tarinfo, str(destination)) # pylint: disable=protected-access
except BaseException:
get_logger().exception('Exception thrown for tar member: %s', tarinfo.name)
raise BuildkitAbort()
def extract_tar_file(archive_path, buildspace_tree, unpack_dir, ignore_files, relative_to, #pylint: disable=too-many-arguments
extractors=None):
def extract_tar_file(
archive_path,
output_dir,
relative_to, #pylint: disable=too-many-arguments
extractors=None):
"""
Extract regular or compressed tar archive into the buildspace tree.
Extract regular or compressed tar archive into the output directory.
archive_path is the pathlib.Path to the archive to unpack
buildspace_tree is a pathlib.Path to the buildspace tree.
unpack_dir is a pathlib.Path relative to buildspace_tree to unpack the archive.
It must already exist.
output_dir is a pathlib.Path to the directory to unpack. It must already exist.
ignore_files is a set of paths as strings that should not be extracted from the archive.
Files that have been ignored are removed from the set.
relative_to is a pathlib.Path for directories that should be stripped relative to the
root of the archive.
root of the archive, or None if no path components should be stripped.
extractors is a dictionary of PlatformEnum to a command or path to the
extractor binary. Defaults to 'tar' for tar, and '_use_registry' for 7-Zip.
extractor binary. Defaults to 'tar' for tar, and '_use_registry' for 7-Zip.
Raises BuildkitAbort if unexpected issues arise during unpacking.
"""
resolved_tree = buildspace_tree.resolve()
if extractors is None:
extractors = DEFAULT_EXTRACTORS
@@ -209,39 +203,33 @@ def extract_tar_file(archive_path, buildspace_tree, unpack_dir, ignore_files, re
sevenzip_cmd = str(_find_7z_by_registry())
sevenzip_bin = _find_extractor_by_cmd(sevenzip_cmd)
if not sevenzip_bin is None:
_extract_tar_with_7z(
binary=sevenzip_bin, archive_path=archive_path, buildspace_tree=resolved_tree,
unpack_dir=unpack_dir, ignore_files=ignore_files, relative_to=relative_to)
_extract_tar_with_7z(sevenzip_bin, archive_path, output_dir, relative_to)
return
elif current_platform == PlatformEnum.UNIX:
# NOTE: 7-zip isn't an option because it doesn't preserve file permissions
tar_bin = _find_extractor_by_cmd(extractors.get(ExtractorEnum.TAR))
if not tar_bin is None:
_extract_tar_with_tar(
binary=tar_bin, archive_path=archive_path, buildspace_tree=resolved_tree,
unpack_dir=unpack_dir, ignore_files=ignore_files, relative_to=relative_to)
_extract_tar_with_tar(tar_bin, archive_path, output_dir, relative_to)
return
else:
# This is not a normal code path, so make it clear.
raise NotImplementedError(current_platform)
# Fallback to Python-based extractor on all platforms
_extract_tar_with_python(
archive_path=archive_path, buildspace_tree=resolved_tree, unpack_dir=unpack_dir,
ignore_files=ignore_files, relative_to=relative_to)
_extract_tar_with_python(archive_path, output_dir, relative_to)
def extract_with_7z(archive_path, buildspace_tree, unpack_dir, ignore_files, relative_to, #pylint: disable=too-many-arguments
extractors=None):
def extract_with_7z(
archive_path,
output_dir,
relative_to, #pylint: disable=too-many-arguments
extractors=None):
"""
Extract archives with 7-zip into the buildspace tree.
Extract archives with 7-zip into the output directory.
Only supports archives with one layer of unpacking, so compressed tar archives don't work.
archive_path is the pathlib.Path to the archive to unpack
buildspace_tree is a pathlib.Path to the buildspace tree.
unpack_dir is a pathlib.Path relative to buildspace_tree to unpack the archive.
It must already exist.
output_dir is a pathlib.Path to the directory to unpack. It must already exist.
ignore_files is a set of paths as strings that should not be extracted from the archive.
Files that have been ignored are removed from the set.
relative_to is a pathlib.Path for directories that should be stripped relative to the
root of the archive.
extractors is a dictionary of PlatformEnum to a command or path to the
@@ -260,14 +248,12 @@ def extract_with_7z(archive_path, buildspace_tree, unpack_dir, ignore_files, rel
raise BuildkitAbort()
sevenzip_cmd = str(_find_7z_by_registry())
sevenzip_bin = _find_extractor_by_cmd(sevenzip_cmd)
resolved_tree = buildspace_tree.resolve()
out_dir = resolved_tree / unpack_dir
if not relative_to is None and (out_dir / relative_to).exists():
get_logger().error(
'Temporary unpacking directory already exists: %s', out_dir / relative_to)
if not relative_to is None and (output_dir / relative_to).exists():
get_logger().error('Temporary unpacking directory already exists: %s',
output_dir / relative_to)
raise BuildkitAbort()
cmd = (sevenzip_bin, 'x', str(archive_path), '-aoa', '-o{}'.format(str(out_dir)))
cmd = (sevenzip_bin, 'x', str(archive_path), '-aoa', '-o{}'.format(str(output_dir)))
get_logger().debug('7z command line: %s', ' '.join(cmd))
result = subprocess.run(cmd)
@@ -276,6 +262,4 @@ def extract_with_7z(archive_path, buildspace_tree, unpack_dir, ignore_files, rel
raise BuildkitAbort()
if not relative_to is None:
_process_relative_to(out_dir, relative_to)
_prune_tree(out_dir, ignore_files)
_process_relative_to(output_dir, relative_to)


@@ -1,79 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Common code for build files generators"""
import hashlib
import re
import string
import subprocess
import urllib.request
from pathlib import Path
from ..common import ENCODING, BuildkitAbort, get_logger
# Constants
SHARED_PACKAGING = 'shared'
PROCESS_BUILD_OUTPUTS = 'process_build_outputs.py'
APPLY_PATCH_SERIES = 'apply_patch_series.py'
DEFAULT_BUILD_OUTPUT = Path('out/Default')
# Classes
class BuildFileStringTemplate(string.Template):
"""
Custom string substitution class
Inspired by
http://stackoverflow.com/questions/12768107/string-substitutions-using-templates-in-python
"""
pattern = r"""
{delim}(?:
(?P<escaped>{delim}) |
_(?P<named>{id}) |
{{(?P<braced>{id})}} |
(?P<invalid>{delim}((?!_)|(?!{{)))
)
""".format(delim=re.escape("$ungoog"), id=string.Template.idpattern)
# Methods
def process_templates(root_dir, build_file_subs):
"""Substitute '$ungoog' strings in '.in' template files and remove the suffix"""
for old_path in root_dir.glob('*.in'):
new_path = root_dir / old_path.stem
old_path.replace(new_path)
with new_path.open('r+', encoding=ENCODING) as new_file:
content = BuildFileStringTemplate(new_file.read()).substitute(
**build_file_subs)
new_file.seek(0)
new_file.write(content)
new_file.truncate()
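As a rough illustration of the template machinery above, the subclass below reproduces the `$ungoog` delimiter pattern and substitutes a couple of keys; the variable names and version string are illustrative, not taken from a real bundle:

```python
import re
import string

class BuildFileStringTemplate(string.Template):
    """string.Template subclass that uses "$ungoog" as its substitution delimiter."""
    pattern = r"""
    {delim}(?:
      (?P<escaped>{delim}) |
      _(?P<named>{id}) |
      {{(?P<braced>{id})}} |
      (?P<invalid>{delim}((?!_)|(?!{{)))
    )
    """.format(delim=re.escape("$ungoog"), id=string.Template.idpattern)

# Both the braced and the underscore-prefixed forms resolve to keyword arguments
template = BuildFileStringTemplate('version=$ungoog{version_string} out=$ungoog_build_output')
result = template.substitute(version_string='68.0.3440.75-1', build_output='out/Default')
print(result)  # version=68.0.3440.75-1 out=out/Default
```

Missing keys raise `KeyError`, exactly as with the stock `string.Template`, so an unfilled `.in` template fails loudly during packaging generation.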
def get_current_commit():
"""
Returns a string of the current commit hash.
It assumes "git" is in PATH, and that buildkit is run within a git repository.
Raises BuildkitAbort if invoking git fails.
"""
result = subprocess.run(['git', 'rev-parse', '--verify', 'HEAD'],
stdout=subprocess.PIPE, universal_newlines=True,
cwd=str(Path(__file__).resolve().parent))
if result.returncode:
get_logger().error('Unexpected return code %s', result.returncode)
get_logger().error('Command output: %s', result.stdout)
raise BuildkitAbort()
return result.stdout.strip('\n')
def get_remote_file_hash(url, hash_type='sha256'):
"""Downloads and returns a hash of a file at the given url"""
with urllib.request.urlopen(url) as file_obj:
return hashlib.new(hash_type, file_obj.read()).hexdigest()


@@ -1,88 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2017 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Arch Linux-specific build files generation code"""
from ..common import ENCODING, PACKAGING_DIR, BuildkitAbort, get_resources_dir, get_logger
from ._common import (
DEFAULT_BUILD_OUTPUT, SHARED_PACKAGING, BuildFileStringTemplate,
get_current_commit, get_remote_file_hash)
# Private definitions
# PKGBUILD constants
_FLAGS_INDENTATION = 4
_REPO_URL_TEMPLATE = 'https://github.com/Eloston/ungoogled-chromium/archive/{}.tar.gz'
def _get_packaging_resources(shared=False):
if shared:
return get_resources_dir() / PACKAGING_DIR / SHARED_PACKAGING
return get_resources_dir() / PACKAGING_DIR / 'archlinux'
def _generate_gn_flags(flags_items_iter):
"""Returns GN flags for the PKGBUILD"""
indentation = ' ' * _FLAGS_INDENTATION
return '\n'.join(map(lambda x: indentation + "'{}={}'".format(*x), flags_items_iter))
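A minimal standalone sketch of the flag formatting above; the sample GN flags are hypothetical:

```python
_FLAGS_INDENTATION = 4

def generate_gn_flags(flags_items_iter):
    """Render GN key=value pairs as indented, quoted PKGBUILD array entries."""
    indentation = ' ' * _FLAGS_INDENTATION
    return '\n'.join(indentation + "'{}={}'".format(*item) for item in flags_items_iter)

sample_flags = {'is_debug': 'false', 'enable_nacl': 'false'}
print(generate_gn_flags(sorted(sample_flags.items())))
#     'enable_nacl=false'
#     'is_debug=false'
```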
# Public definitions
def generate_packaging(config_bundle, output_dir, repo_version='bundle',
repo_hash='SKIP', build_output=DEFAULT_BUILD_OUTPUT):
"""
Generates an Arch Linux PKGBUILD into output_dir
config_bundle is the config.ConfigBundle to use for configuration
output_dir is the pathlib.Path to a directory that will contain the PKGBUILD.
repo_version is a string that specifies the ungoogled-chromium repository to
download for use within the PKGBUILD. The string 'bundle' causes the use of
config_bundle's version config file, and 'git' uses the current commit hash
from git (which assumes "git" is in PATH, and that buildkit is run within a
git repository).
repo_hash is a string specifying the SHA-256 to verify the archive of
the ungoogled-chromium repository to download within the PKGBUILD. If it is
'compute', the archive is downloaded to memory and a hash is computed. If it
is 'SKIP', hash computation is skipped in the PKGBUILD.
build_output is a pathlib.Path for building intermediates and outputs to be stored
Raises FileExistsError if a file named PKGBUILD already exists in output_dir
Raises FileNotFoundError if output_dir is not an existing directory.
"""
if repo_version == 'bundle':
repo_version = config_bundle.version.version_string
elif repo_version == 'git':
repo_version = get_current_commit()
repo_url = _REPO_URL_TEMPLATE.format(repo_version)
if repo_hash == 'compute':
get_logger().debug('Downloading archive into memory for hash computation...')
repo_hash = get_remote_file_hash(repo_url)
get_logger().debug('Computed hash: %s', repo_hash)
elif repo_hash == 'SKIP':
pass # Allow skipping of hash verification
elif len(repo_hash) != 64: # Length of hex representation of SHA-256 hash
get_logger().error('Invalid repo_hash value: %s', repo_hash)
raise BuildkitAbort()
build_file_subs = dict(
chromium_version=config_bundle.version.chromium_version,
release_revision=config_bundle.version.release_revision,
repo_url=repo_url,
repo_version=repo_version,
repo_hash=repo_hash,
build_output=build_output,
gn_flags=_generate_gn_flags(sorted(config_bundle.gn_flags.items())),
)
if not output_dir.is_dir():
raise FileNotFoundError(output_dir)
pkgbuild_path = output_dir / 'PKGBUILD'
if pkgbuild_path.exists():
raise FileExistsError(pkgbuild_path)
# Generate PKGBUILD
with (_get_packaging_resources() / 'PKGBUILD.in').open(encoding=ENCODING) as file_obj:
content = BuildFileStringTemplate(file_obj.read()).substitute(
**build_file_subs)
with pkgbuild_path.open('w', encoding=ENCODING) as file_obj:
file_obj.write(content)


@@ -1,178 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Debian-specific build files generation code"""
import locale
import datetime
import os
import shutil
from ..third_party import schema
from ..common import PACKAGING_DIR, PATCHES_DIR, get_resources_dir, ensure_empty_dir
from ..config import RequiredConfigMixin, IniConfigFile, schema_inisections, schema_dictcast
from ._common import DEFAULT_BUILD_OUTPUT, process_templates
# Private definitions
_DEPENDENCIES_INI = 'dependencies.ini'
class _DependenciesIni(RequiredConfigMixin, IniConfigFile):
_schema = schema.Schema(schema_inisections({
schema.And(str, len): schema_dictcast({
'parent': schema.And(str, len),
}),
}))
def get_parent(self, name):
"""
Returns the parent name for the given flavor, or None if there is no parent.
"""
try:
return self._config_data[name]['parent']
except KeyError:
return None
def _get_packaging_resources():
return get_resources_dir() / PACKAGING_DIR / 'debian'
def _traverse_directory(directory):
"""Traversal of an entire directory tree in random order"""
iterator_stack = list()
iterator_stack.append(directory.iterdir())
while iterator_stack:
current_iter = iterator_stack.pop()
for path in current_iter:
yield path
if path.is_dir():
iterator_stack.append(current_iter)
iterator_stack.append(path.iterdir())
break
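The iterator-stack traversal above avoids recursion by re-pushing the partially consumed iterator before descending; a self-contained sketch against a throwaway temporary tree behaves like this:

```python
import pathlib
import tempfile

def traverse_directory(directory):
    """Lazily yield every path under directory using an explicit iterator stack."""
    iterator_stack = [directory.iterdir()]
    while iterator_stack:
        current_iter = iterator_stack.pop()
        for path in current_iter:
            yield path
            if path.is_dir():
                # Save our place in the current directory, then descend into the child
                iterator_stack.append(current_iter)
                iterator_stack.append(path.iterdir())
                break

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / 'a' / 'b').mkdir(parents=True)
    (root / 'a' / 'file.txt').write_text('x')
    names = sorted(p.name for p in traverse_directory(root))
    print(names)  # ['a', 'b', 'file.txt']
```

The yield order within a directory is whatever `iterdir()` returns, hence "random order" in the docstring above.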
class _Flavor:
"""
Represents a Debian packaging flavor
"""
_loaded_flavors = dict()
_flavor_tree = None
def __new__(cls, name):
if name in cls._loaded_flavors:
return cls._loaded_flavors[name]
return super().__new__(cls)
def __init__(self, name):
if name not in self._loaded_flavors:
self._loaded_flavors[name] = self
self.name = name
self.path = _get_packaging_resources() / name
if not self.path.is_dir():
raise ValueError("Not an existing flavor: '{}'".format(name))
def __str__(self):
return "<Flavor: {}>".format(str(self.path))
def __repr__(self):
return str(self)
@classmethod
def _get_parent_name(cls, child):
if not cls._flavor_tree:
cls._flavor_tree = _DependenciesIni(_get_packaging_resources() / _DEPENDENCIES_INI)
return cls._flavor_tree.get_parent(child)
@property
def parent(self):
"""
Returns the Flavor object that this inherits from.
Returns None if there is no parent
"""
parent_name = self._get_parent_name(self.name)
if parent_name:
return _Flavor(parent_name)
return None
def _resolve_file_flavors(self):
file_flavor_resolutions = dict()
current_flavor = self
while current_flavor:
for path in _traverse_directory(current_flavor.path):
rel_path = path.relative_to(current_flavor.path)
if rel_path not in file_flavor_resolutions:
file_flavor_resolutions[rel_path] = current_flavor
current_flavor = current_flavor.parent
return sorted(file_flavor_resolutions.items())
def assemble_files(self, destination):
"""
Copies all files associated with this flavor to `destination`
"""
for rel_path, flavor in self._resolve_file_flavors():
source_path = flavor.path / rel_path
dest_path = destination / rel_path
if source_path.is_dir():
dest_path.mkdir()
shutil.copymode(str(source_path), str(dest_path), follow_symlinks=False)
else:
shutil.copy(str(source_path), str(dest_path), follow_symlinks=False)
def _get_dpkg_changelog_datetime(override_datetime=None):
if override_datetime is None:
current_datetime = datetime.date.today()
else:
current_datetime = override_datetime
current_lc = locale.setlocale(locale.LC_TIME)
try:
# Setting the locale is bad practice, but datetime.strftime requires it
locale.setlocale(locale.LC_TIME, "C")
result = current_datetime.strftime("%a, %d %b %Y %H:%M:%S ")
timezone = current_datetime.strftime("%z")
if not timezone:
timezone = "+0000"
return result + timezone
finally:
locale.setlocale(locale.LC_TIME, current_lc)
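A sketch of the locale-pinning trick above, using an arbitrary fixed timestamp; a naive datetime yields an empty `%z`, which is why the `+0000` fallback exists:

```python
import datetime
import locale

def dpkg_changelog_datetime(when):
    """Format a datetime for debian/changelog with LC_TIME temporarily pinned to "C"."""
    saved_locale = locale.setlocale(locale.LC_TIME)
    try:
        # Pin the locale so day/month abbreviations are always English
        locale.setlocale(locale.LC_TIME, 'C')
        stamp = when.strftime('%a, %d %b %Y %H:%M:%S ')
        return stamp + (when.strftime('%z') or '+0000')
    finally:
        locale.setlocale(locale.LC_TIME, saved_locale)

print(dpkg_changelog_datetime(datetime.datetime(2018, 8, 16, 7, 2, 2)))
# Thu, 16 Aug 2018 07:02:02 +0000
```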
def _escape_string(value):
return value.replace('"', '\\"')
def _get_parsed_gn_flags(gn_flags):
def _shell_line_generator(gn_flags):
for key, value in gn_flags.items():
yield "defines+=" + _escape_string(key) + "=" + _escape_string(value)
return os.linesep.join(_shell_line_generator(gn_flags))
# Public definitions
def generate_packaging(config_bundle, flavor, debian_dir,
build_output=DEFAULT_BUILD_OUTPUT, distro_version='stable'):
"""
Generates a debian directory in the buildspace tree
config_bundle is a config.ConfigBundle to use for configuration
flavor is a Debian packaging flavor name to use
debian_dir is a pathlib.Path to the Debian directory to be created.
build_output is the pathlib.Path for building intermediates and outputs to be stored
distro_version is the distribution version name to use in debian/changelog
Raises FileExistsError if debian_dir already exists and is not empty.
Raises FileNotFoundError if the parent directories for debian_dir do not exist.
"""
# Use config_bundle.version.version_string for Debian version string
build_file_subs = dict(
changelog_version=config_bundle.version.version_string,
changelog_datetime=_get_dpkg_changelog_datetime(),
build_output=build_output,
distribution_version=distro_version,
gn_flags=_get_parsed_gn_flags(config_bundle.gn_flags)
)
ensure_empty_dir(debian_dir) # Raises FileNotFoundError, FileExistsError
_Flavor(flavor).assemble_files(debian_dir)
process_templates(debian_dir, build_file_subs)
config_bundle.patches.export_patches(debian_dir / PATCHES_DIR)


@@ -1,63 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2017 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Linux Simple-specific build files generation code"""
import shutil
from ..common import PACKAGING_DIR, PATCHES_DIR, get_resources_dir, ensure_empty_dir
from ._common import (
DEFAULT_BUILD_OUTPUT, SHARED_PACKAGING, PROCESS_BUILD_OUTPUTS, APPLY_PATCH_SERIES,
process_templates)
# Private definitions
def _get_packaging_resources(shared=False):
if shared:
return get_resources_dir() / PACKAGING_DIR / SHARED_PACKAGING
return get_resources_dir() / PACKAGING_DIR / 'linux_simple'
def _copy_from_resources(name, output_dir, shared=False):
shutil.copy(
str(_get_packaging_resources(shared=shared) / name),
str(output_dir / name))
# Public definitions
def generate_packaging(config_bundle, output_dir, build_output=DEFAULT_BUILD_OUTPUT):
"""
Generates the linux_simple packaging into output_dir
config_bundle is the config.ConfigBundle to use for configuration
output_dir is the pathlib.Path directory that will be created to contain packaging files
build_output is a pathlib.Path for building intermediates and outputs to be stored
Raises FileExistsError if output_dir already exists and is not empty.
Raises FileNotFoundError if the parent directories for output_dir do not exist.
"""
build_file_subs = dict(
build_output=build_output,
gn_args_string=' '.join(
'{}={}'.format(flag, value) for flag, value in config_bundle.gn_flags.items()),
version_string=config_bundle.version.version_string
)
ensure_empty_dir(output_dir) # Raises FileNotFoundError, FileExistsError
(output_dir / 'scripts').mkdir()
(output_dir / 'archive_include').mkdir()
# Build and packaging scripts
_copy_from_resources('build.sh.in', output_dir)
_copy_from_resources('package.sh.in', output_dir)
_copy_from_resources(PROCESS_BUILD_OUTPUTS, output_dir / 'scripts', shared=True)
_copy_from_resources(APPLY_PATCH_SERIES, output_dir / 'scripts', shared=True)
process_templates(output_dir, build_file_subs)
# Other resources to package
_copy_from_resources('README', output_dir / 'archive_include')
# Patches
config_bundle.patches.export_patches(output_dir / PATCHES_DIR)


@@ -1,56 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2017 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""macOS-specific build files generation code"""
import shutil
from ..common import PACKAGING_DIR, PATCHES_DIR, get_resources_dir, ensure_empty_dir
from ._common import DEFAULT_BUILD_OUTPUT, SHARED_PACKAGING, APPLY_PATCH_SERIES, process_templates
# Private definitions
def _get_packaging_resources(shared=False):
if shared:
return get_resources_dir() / PACKAGING_DIR / SHARED_PACKAGING
return get_resources_dir() / PACKAGING_DIR / 'macos'
def _copy_from_resources(name, output_dir, shared=False):
shutil.copy(
str(_get_packaging_resources(shared=shared) / name),
str(output_dir / name))
# Public definitions
def generate_packaging(config_bundle, output_dir, build_output=DEFAULT_BUILD_OUTPUT):
"""
Generates the macOS packaging into output_dir
config_bundle is the config.ConfigBundle to use for configuration
output_dir is the pathlib.Path directory that will be created to contain packaging files
build_output is a pathlib.Path for building intermediates and outputs to be stored
Raises FileExistsError if output_dir already exists and is not empty.
Raises FileNotFoundError if the parent directories for output_dir do not exist.
"""
build_file_subs = dict(
build_output=build_output,
gn_args_string=' '.join(
'{}={}'.format(flag, value) for flag, value in config_bundle.gn_flags.items()),
version_string=config_bundle.version.version_string
)
ensure_empty_dir(output_dir) # Raises FileNotFoundError, FileExistsError
(output_dir / 'scripts').mkdir()
_copy_from_resources(APPLY_PATCH_SERIES, output_dir / 'scripts', shared=True)
# Build script
_copy_from_resources('build.sh.in', output_dir)
process_templates(output_dir, build_file_subs)
# Patches
config_bundle.patches.export_patches(output_dir / PATCHES_DIR)


@@ -1,104 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2017 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""OpenSUSE-specific build files generation code"""
import os
import shutil
from ..common import PACKAGING_DIR, PATCHES_DIR, get_resources_dir, ensure_empty_dir
from ._common import (
ENCODING, DEFAULT_BUILD_OUTPUT, SHARED_PACKAGING, PROCESS_BUILD_OUTPUTS, process_templates)
# Private definitions
def _get_packaging_resources(shared=False):
if shared:
return get_resources_dir() / PACKAGING_DIR / SHARED_PACKAGING
return get_resources_dir() / PACKAGING_DIR / 'opensuse'
def _copy_from_resources(name, output_dir, shared=False):
shutil.copy(
str(_get_packaging_resources(shared=shared) / name),
str(output_dir / name))
def _copy_tree_from_resources(name, output_dir, output_dir_name, shared=False):
shutil.copytree(
str(_get_packaging_resources(shared=shared) / name),
str(output_dir / output_dir_name))
def _escape_string(value):
return value.replace('"', '\\"')
def _get_parsed_gn_flags(gn_flags):
def _shell_line_generator(gn_flags):
for key, value in gn_flags.items():
yield "myconf_gn+=\" " + _escape_string(key) + "=" + _escape_string(value) + "\""
return os.linesep.join(_shell_line_generator(gn_flags))
def _get_spec_format_patch_series(series_path):
patch_string = ''
patch_list = []
with series_path.open(encoding=ENCODING) as series_file:
patch_list = series_file.readlines()
i = 1
for patch_file in patch_list:
last_slash_pos = patch_file.rfind('/')
patch_file = patch_file[last_slash_pos + 1:]
patch_string += 'Patch{0}: {1}'.format(i, patch_file)
i += 1
return {'patchString': patch_string, 'numPatches': len(patch_list)}
def _get_patch_apply_spec_cmd(num_patches):
patch_apply_string = ''
for i in range(1, num_patches + 1):
patch_apply_string += '%patch{0} -p1\n'.format(i)
return patch_apply_string
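The two helpers above pair a numbered `PatchN:` list with matching `%patchN` apply lines; the apply side can be sketched on its own:

```python
def patch_apply_spec_cmd(num_patches):
    """Emit the %prep lines that apply a numbered patch series in an RPM spec."""
    return ''.join('%patch{0} -p1\n'.format(i) for i in range(1, num_patches + 1))

print(patch_apply_spec_cmd(3), end='')
# %patch1 -p1
# %patch2 -p1
# %patch3 -p1
```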
# Public definitions
def generate_packaging(config_bundle, output_dir, build_output=DEFAULT_BUILD_OUTPUT):
"""
Generates the opensuse packaging into output_dir
config_bundle is the config.ConfigBundle to use for configuration
output_dir is the pathlib.Path directory that will be created to contain packaging files
build_output is a pathlib.Path for building intermediates and outputs to be stored
Raises FileExistsError if output_dir already exists and is not empty.
Raises FileNotFoundError if the parent directories for output_dir do not exist.
"""
ensure_empty_dir(output_dir) # Raises FileNotFoundError, FileExistsError
(output_dir / 'scripts').mkdir()
(output_dir / 'archive_include').mkdir()
# Patches
config_bundle.patches.export_patches(output_dir / PATCHES_DIR)
patch_info = _get_spec_format_patch_series(output_dir / PATCHES_DIR / 'series')
build_file_subs = dict(
build_output=build_output,
gn_flags=_get_parsed_gn_flags(config_bundle.gn_flags),
gn_args_string=' '.join(
'{}={}'.format(flag, value) for flag, value in config_bundle.gn_flags.items()),
numbered_patch_list=patch_info['patchString'],
apply_patches_cmd=_get_patch_apply_spec_cmd(patch_info['numPatches']),
chromium_version=config_bundle.version.chromium_version,
release_revision=config_bundle.version.release_revision
)
# Build and packaging scripts
_copy_from_resources('setup.sh.in', output_dir)
_copy_from_resources('ungoogled-chromium.spec.in', output_dir)
_copy_from_resources(PROCESS_BUILD_OUTPUTS, output_dir / 'scripts', shared=True)
process_templates(output_dir, build_file_subs)
# Other resources to package
_copy_from_resources('README', output_dir / 'archive_include')
_copy_tree_from_resources('chromium-icons_contents', output_dir, 'chromium-icons_contents')
_copy_tree_from_resources('sources_template', output_dir, 'SOURCES')


@@ -1,60 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2017 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Microsoft Windows-specific build files generation code"""
import shutil
from ..common import PACKAGING_DIR, PATCHES_DIR, get_resources_dir, ensure_empty_dir
from ._common import (
DEFAULT_BUILD_OUTPUT, SHARED_PACKAGING, PROCESS_BUILD_OUTPUTS, APPLY_PATCH_SERIES,
process_templates)
# Private definitions
def _get_packaging_resources(shared=False):
if shared:
return get_resources_dir() / PACKAGING_DIR / SHARED_PACKAGING
return get_resources_dir() / PACKAGING_DIR / 'windows'
def _copy_from_resources(name, output_dir, shared=False):
shutil.copy(
str(_get_packaging_resources(shared=shared) / name),
str(output_dir / name))
# Public definitions
def generate_packaging(config_bundle, output_dir, build_output=DEFAULT_BUILD_OUTPUT):
"""
Generates the windows packaging into output_dir
config_bundle is the config.ConfigBundle to use for configuration
output_dir is the pathlib.Path directory that will be created to contain packaging files
build_output is a pathlib.Path for building intermediates and outputs to be stored
Raises FileExistsError if output_dir already exists and is not empty.
Raises FileNotFoundError if the parent directories for output_dir do not exist.
"""
build_file_subs = dict(
build_output=build_output,
version_string=config_bundle.version.version_string
)
ensure_empty_dir(output_dir) # Raises FileNotFoundError, FileExistsError
(output_dir / 'scripts').mkdir()
# Build and packaging scripts
_copy_from_resources('build.bat.in', output_dir)
_copy_from_resources('package.bat.in', output_dir)
_copy_from_resources(PROCESS_BUILD_OUTPUTS, output_dir / 'scripts', shared=True)
_copy_from_resources(APPLY_PATCH_SERIES, output_dir / 'scripts', shared=True)
process_templates(output_dir, build_file_subs)
# GN flags
config_bundle.gn_flags.write(output_dir / 'args.gn')
# Patches to apply via quilt
config_bundle.patches.export_patches(output_dir / PATCHES_DIR)

buildkit/patches.py (new file)

@@ -0,0 +1,95 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Utilities for reading and copying patches"""
import shutil
import subprocess
from pathlib import Path
from .common import ENCODING, get_logger, ensure_empty_dir
# Default patches/ directory is next to buildkit
DEFAULT_PATCH_DIR = Path(__file__).absolute().parent.parent / 'patches'
def patch_paths_by_bundle(config_bundle, patch_dir=DEFAULT_PATCH_DIR):
"""
Returns an iterator of pathlib.Path to patch files in the proper order
config_bundle is a config.ConfigBundle with the patch order to use
patch_dir is the path to the patches/ directory
Raises NotADirectoryError if patch_dir is not a directory or does not exist
"""
if not patch_dir.is_dir():
raise NotADirectoryError(str(patch_dir))
for relative_path in config_bundle.patch_order:
yield patch_dir / relative_path
def export_patches(config_bundle, path, series=Path('series'), patch_dir=DEFAULT_PATCH_DIR):
"""
Writes patches and a series file to the directory specified by path.
This is useful for writing a quilt-compatible patches directory and series file.
config_bundle is a config.ConfigBundle with the patch order to use
path is a pathlib.Path to the patches directory to create. It must not already exist.
series is a pathlib.Path to the series file, relative to path.
patch_dir is the path to the patches/ directory
Raises FileExistsError if path already exists and is not empty.
Raises FileNotFoundError if the parent directories for path do not exist.
Raises NotADirectoryError if patch_dir is not a directory or does not exist
"""
ensure_empty_dir(path) # Raises FileExistsError, FileNotFoundError
if not patch_dir.is_dir():
raise NotADirectoryError(str(patch_dir))
for relative_path in config_bundle.patch_order:
destination = path / relative_path
destination.parent.mkdir(parents=True, exist_ok=True)
shutil.copyfile(str(patch_dir / relative_path), str(destination))
with (path / series).open('w', encoding=ENCODING) as file_obj:
file_obj.write(str(config_bundle.patch_order))
def apply_patches(patch_path_iter, tree_path, reverse=False, patch_bin_path=None):
"""
Applies or reverses a list of patches
patch_path_iter is a list or tuple of pathlib.Path to patch files to apply
tree_path is the pathlib.Path of the source tree to patch
reverse is whether the patches should be reversed
patch_bin_path is the pathlib.Path of the patch binary, or None to find it automatically
On Windows, this will look for the binary in third_party/git/usr/bin/patch.exe
On other platforms, this will search the PATH environment variable for "patch"
Raises ValueError if the patch binary could not be found.
"""
patch_paths = list(patch_path_iter)
if patch_bin_path is None:
windows_patch_bin_path = (tree_path / 'third_party' / 'git' / 'usr' / 'bin' / 'patch.exe')
patch_bin_path = Path(shutil.which('patch') or windows_patch_bin_path)
if not patch_bin_path.exists():
raise ValueError('Could not find the patch binary')
if reverse:
patch_paths.reverse()
logger = get_logger()
for patch_path, patch_num in zip(patch_paths, range(1, len(patch_paths) + 1)):
cmd = [
str(patch_bin_path), '-p1', '--ignore-whitespace', '-i',
str(patch_path), '-d',
str(tree_path), '--no-backup-if-mismatch'
]
if reverse:
cmd.append('--reverse')
log_word = 'Reversing'
else:
cmd.append('--forward')
log_word = 'Applying'
logger.info('* %s %s (%s/%s)', log_word, patch_path.name, patch_num, len(patch_paths))
logger.debug(' '.join(cmd))
subprocess.run(cmd, check=True)
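For reference, the argument vector assembled in the loop above looks like this (a sketch only; the paths are hypothetical and nothing is executed):

```python
from pathlib import Path

def build_patch_cmd(patch_bin, patch_path, tree_path, reverse=False):
    """Assemble the GNU patch argument vector used above, without running it."""
    cmd = [
        str(patch_bin), '-p1', '--ignore-whitespace', '-i', str(patch_path),
        '-d', str(tree_path), '--no-backup-if-mismatch'
    ]
    # --forward skips already-applied patches; --reverse backs them out
    cmd.append('--reverse' if reverse else '--forward')
    return cmd

print(build_patch_cmd('patch', Path('patches/fix.patch'), Path('src')))
```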


@@ -1,234 +0,0 @@
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Module for the downloading, checking, and unpacking of necessary files into the buildspace tree
"""
import urllib.request
import hashlib
from pathlib import Path
from .common import (
ENCODING, ExtractorEnum, get_logger, ensure_empty_dir)
from .extraction import extract_tar_file, extract_with_7z
# Constants
_SOURCE_ARCHIVE_URL = ('https://commondatastorage.googleapis.com/'
'chromium-browser-official/chromium-{}.tar.xz')
_SOURCE_HASHES_URL = _SOURCE_ARCHIVE_URL + '.hashes'
# Custom Exceptions
class NotAFileError(OSError):
"""Exception for paths expected to be regular files"""
pass
class HashMismatchError(Exception):
"""Exception for computed hashes not matching expected hashes"""
pass
class _UrlRetrieveReportHook: #pylint: disable=too-few-public-methods
"""Hook for urllib.request.urlretrieve to log progress information to console"""
def __init__(self):
self._max_len_printed = 0
self._last_percentage = None
def __call__(self, block_count, block_size, total_size):
downloaded_estimate = block_count * block_size
percentage = round(downloaded_estimate / total_size, ndigits=3)
if percentage == self._last_percentage:
return # Do not needlessly update the console
self._last_percentage = percentage
print('\r' + ' ' * self._max_len_printed, end='')
if total_size > 0:
status_line = 'Progress: {:.1%} of {:,d} B'.format(percentage, total_size)
else:
status_line = 'Progress: {:,d} B of unknown size'.format(downloaded_estimate)
self._max_len_printed = len(status_line)
print('\r' + status_line, end='')
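The hook's status line can be reproduced as a pure function; the division is guarded here so an unknown total size never divides by zero, and the sample numbers are arbitrary:

```python
def progress_status(block_count, block_size, total_size):
    """Build the same status line the urlretrieve report hook above prints."""
    downloaded_estimate = block_count * block_size
    if total_size > 0:
        percentage = round(downloaded_estimate / total_size, ndigits=3)
        return 'Progress: {:.1%} of {:,d} B'.format(percentage, total_size)
    return 'Progress: {:,d} B of unknown size'.format(downloaded_estimate)

print(progress_status(50, 8192, 819200))  # Progress: 50.0% of 819,200 B
print(progress_status(50, 8192, -1))      # Progress: 409,600 B of unknown size
```

`urlretrieve` passes `-1` as the total size when the server sends no Content-Length header, which is what the fallback branch handles.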
def _download_if_needed(file_path, url, show_progress):
"""
Downloads a file from url to the specified path file_path if necessary.
If show_progress is True, download progress is printed to the console.
Raises source_retrieval.NotAFileError when the destination exists but is not a file.
"""
if file_path.exists() and not file_path.is_file():
raise NotAFileError(file_path)
elif not file_path.exists():
get_logger().info('Downloading %s ...', file_path)
reporthook = None
if show_progress:
reporthook = _UrlRetrieveReportHook()
urllib.request.urlretrieve(url, str(file_path), reporthook=reporthook)
if show_progress:
print()
else:
get_logger().info('%s already exists. Skipping download.', file_path)
def _chromium_hashes_generator(hashes_path):
with hashes_path.open(encoding=ENCODING) as hashes_file:
hash_lines = hashes_file.read().splitlines()
for hash_name, hash_hex, _ in map(lambda x: x.lower().split(' '), hash_lines):
if hash_name in hashlib.algorithms_available:
yield hash_name, hash_hex
else:
get_logger().warning('Skipping unknown hash algorithm: %s', hash_name)
def _setup_chromium_source(config_bundle, buildspace_downloads, buildspace_tree, #pylint: disable=too-many-arguments
show_progress, pruning_set, extractors=None):
"""
Download, check, and extract the Chromium source code into the buildspace tree.
Arguments of the same name are shared with retrieve_and_extract().
pruning_set is a set of files to be pruned. Only the files that are ignored during
extraction are removed from the set.
extractors is a dictionary of PlatformEnum to a command or path to the
extractor binary. Defaults to 'tar' for tar, and '_use_registry' for 7-Zip.
Raises source_retrieval.HashMismatchError when the computed and expected hashes do not match.
Raises source_retrieval.NotAFileError when the archive name exists but is not a file.
May raise undetermined exceptions during archive unpacking.
"""
source_archive = buildspace_downloads / 'chromium-{}.tar.xz'.format(
config_bundle.version.chromium_version)
source_hashes = source_archive.with_name(source_archive.name + '.hashes')
if source_archive.exists() and not source_archive.is_file():
raise NotAFileError(source_archive)
if source_hashes.exists() and not source_hashes.is_file():
raise NotAFileError(source_hashes)
get_logger().info('Downloading Chromium source code...')
_download_if_needed(
source_archive,
_SOURCE_ARCHIVE_URL.format(config_bundle.version.chromium_version),
show_progress)
_download_if_needed(
source_hashes,
_SOURCE_HASHES_URL.format(config_bundle.version.chromium_version),
False)
get_logger().info('Verifying hashes...')
with source_archive.open('rb') as file_obj:
archive_data = file_obj.read()
for hash_name, hash_hex in _chromium_hashes_generator(source_hashes):
get_logger().debug('Verifying %s hash...', hash_name)
hasher = hashlib.new(hash_name, data=archive_data)
if not hasher.hexdigest().lower() == hash_hex.lower():
raise HashMismatchError(source_archive)
get_logger().info('Extracting archive...')
extract_tar_file(
archive_path=source_archive, buildspace_tree=buildspace_tree, unpack_dir=Path(),
ignore_files=pruning_set,
relative_to=Path('chromium-{}'.format(config_bundle.version.chromium_version)),
extractors=extractors)
def _setup_extra_deps(config_bundle, buildspace_downloads, buildspace_tree, show_progress, #pylint: disable=too-many-arguments,too-many-locals
pruning_set, extractors=None):
"""
Download, check, and extract extra dependencies into the buildspace tree.
Arguments of the same name are shared with retrieve_and_extract().
pruning_set is a set of files to be pruned. Only the files that are ignored during
extraction are removed from the set.
extractors is a dictionary of PlatformEnum to a command or path to the
extractor binary. Defaults to 'tar' for tar, and '_use_registry' for 7-Zip.
Raises source_retrieval.HashMismatchError when the computed and expected hashes do not match.
Raises source_retrieval.NotAFileError when the archive name exists but is not a file.
May raise undetermined exceptions during archive unpacking.
"""
for dep_name in config_bundle.extra_deps:
get_logger().info('Downloading extra dependency "%s" ...', dep_name)
dep_properties = config_bundle.extra_deps[dep_name]
dep_archive = buildspace_downloads / dep_properties.download_name
_download_if_needed(dep_archive, dep_properties.url, show_progress)
get_logger().info('Verifying hashes...')
with dep_archive.open('rb') as file_obj:
archive_data = file_obj.read()
for hash_name, hash_hex in dep_properties.hashes.items():
get_logger().debug('Verifying %s hash...', hash_name)
hasher = hashlib.new(hash_name, data=archive_data)
if not hasher.hexdigest().lower() == hash_hex.lower():
raise HashMismatchError(dep_archive)
get_logger().info('Extracting to %s ...', dep_properties.output_path)
extractor_name = dep_properties.extractor or ExtractorEnum.TAR
if extractor_name == ExtractorEnum.SEVENZIP:
extractor_func = extract_with_7z
elif extractor_name == ExtractorEnum.TAR:
extractor_func = extract_tar_file
else:
# This is not a normal code path
raise NotImplementedError(extractor_name)
if dep_properties.strip_leading_dirs is None:
strip_leading_dirs_path = None
else:
strip_leading_dirs_path = Path(dep_properties.strip_leading_dirs)
extractor_func(
archive_path=dep_archive, buildspace_tree=buildspace_tree,
unpack_dir=Path(dep_properties.output_path), ignore_files=pruning_set,
relative_to=strip_leading_dirs_path, extractors=extractors)
def retrieve_and_extract(config_bundle, buildspace_downloads, buildspace_tree, #pylint: disable=too-many-arguments
prune_binaries=True, show_progress=True, extractors=None,
disable_ssl_verification=False):
"""
Downloads, checks, and unpacks the Chromium source code and extra dependencies
defined in the config bundle into the buildspace tree.
buildspace_downloads is the path to the buildspace downloads directory, and
buildspace_tree is the path to the buildspace tree.
extractors is a dictionary of PlatformEnum to a command or path to the
extractor binary. Defaults to 'tar' for tar, and '_use_registry' for 7-Zip.
disable_ssl_verification is a boolean indicating if certificate verification
should be disabled for downloads using HTTPS.
Raises FileExistsError when the buildspace tree already exists and is not empty
Raises FileNotFoundError when buildspace/downloads does not exist or through
another system operation.
Raises NotADirectoryError if buildspace/downloads is not a directory or through
another system operation.
Raises source_retrieval.NotAFileError when the archive path exists but is not a regular file.
Raises source_retrieval.HashMismatchError when the computed and expected hashes do not match.
May raise undetermined exceptions during archive unpacking.
"""
ensure_empty_dir(buildspace_tree) # FileExistsError, FileNotFoundError
if not buildspace_downloads.exists():
raise FileNotFoundError(buildspace_downloads)
if not buildspace_downloads.is_dir():
raise NotADirectoryError(buildspace_downloads)
if prune_binaries:
remaining_files = set(config_bundle.pruning)
else:
remaining_files = set()
if disable_ssl_verification:
import ssl
# TODO: Properly implement disabling SSL certificate verification
orig_https_context = ssl._create_default_https_context #pylint: disable=protected-access
ssl._create_default_https_context = ssl._create_unverified_context #pylint: disable=protected-access
try:
_setup_chromium_source(
config_bundle=config_bundle, buildspace_downloads=buildspace_downloads,
buildspace_tree=buildspace_tree, show_progress=show_progress,
pruning_set=remaining_files, extractors=extractors)
_setup_extra_deps(
config_bundle=config_bundle, buildspace_downloads=buildspace_downloads,
buildspace_tree=buildspace_tree, show_progress=show_progress,
pruning_set=remaining_files, extractors=extractors)
finally:
# Try to reduce damage of hack by reverting original HTTPS context ASAP
if disable_ssl_verification:
ssl._create_default_https_context = orig_https_context #pylint: disable=protected-access
if remaining_files:
logger = get_logger()
for path in remaining_files:
logger.warning('File not found during source pruning: %s', path)
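The removed module verifies each downloaded archive against every recognized hash before extraction. A minimal standalone sketch of that verification pattern (the function name is illustrative):

```python
import hashlib

def verify_hashes(archive_data, expected_hashes):
    """Return True if archive_data matches every (name, hex digest) pair given."""
    for hash_name, hash_hex in expected_hashes:
        if hash_name not in hashlib.algorithms_available:
            continue  # mirror the module's behavior: skip unknown algorithms
        hasher = hashlib.new(hash_name, data=archive_data)
        if hasher.hexdigest().lower() != hash_hex.lower():
            return False
    return True

data = b'chromium source archive'
digest = hashlib.sha256(data).hexdigest()
assert verify_hashes(data, [('sha256', digest.upper())])
assert not verify_hashes(data, [('sha256', '0' * 64)])
```

Comparing digests case-insensitively, as above, matches the `.lower()` normalization in `_setup_chromium_source` and `_setup_extra_deps`.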

View File

@@ -1,2 +1,2 @@
[basebundle]
[bundle]
display_name = Mixin for linux_rooted-derived types with newer system libraries

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = Arch Linux
depends = _linux_rooted_newer_mixin,linux_rooted

View File

@@ -1,2 +1,2 @@
[basebundle]
[bundle]
display_name = Common across all bundles

View File

@@ -0,0 +1,7 @@
# Official Chromium source code archive
# NOTE: Substitutions beginning with underscore are provided by buildkit
[chromium]
url = https://commondatastorage.googleapis.com/chromium-browser-official/chromium-%(_chromium_version)s.tar.xz
download_filename = chromium-%(_chromium_version)s.tar.xz
hash_url = chromium|chromium-%(_chromium_version)s.tar.xz.hashes|https://commondatastorage.googleapis.com/chromium-browser-official/chromium-%(_chromium_version)s.tar.xz.hashes
output_path = ./

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = Debian 10 (buster)
depends = _linux_rooted_newer_mixin,linux_rooted

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = Debian 9.0 (stretch)
depends = linux_rooted

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = Linux build with minimal system dependencies
depends = common

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = Shared config among system-dependent Linux configs
depends = common

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = macOS
depends = common

View File

@@ -7,7 +7,7 @@
[google-toolbox-for-mac]
version = 3c3111d3aefe907c8c0f0e933029608d96ceefeb
url = https://github.com/google/google-toolbox-for-mac/archive/%(version)s.tar.gz
download_name = google-toolbox-for-mac-%(version)s.tar.gz
download_filename = google-toolbox-for-mac-%(version)s.tar.gz
strip_leading_dirs = google-toolbox-for-mac-%(version)s
sha512 = 609b91872d123f9c5531954fad2f434a6ccf709cee8ae05f7f584c005ace511d4744a95e29ea057545ed5e882fe5d12385b6d08c88764f00cd64f7f2a0837790
output_path = third_party/google_toolbox_for_mac/src
@@ -16,7 +16,7 @@ output_path = third_party/google_toolbox_for_mac/src
[llvm]
version = 6.0.0
url = http://llvm.org/releases/%(version)s/clang+llvm-%(version)s-x86_64-apple-darwin.tar.xz
download_name = clang+llvm-%(version)s-x86_64-apple-darwin.tar.xz
download_filename = clang+llvm-%(version)s-x86_64-apple-darwin.tar.xz
strip_leading_dirs = clang+llvm-%(version)s-x86_64-apple-darwin
sha512 = 5240c973f929a7f639735821c560505214a6f0f3ea23807ccc9ba3cf4bc4bd86852c99ba78267415672ab3d3563bc2b0a8495cf7119c3949e400c8c17b56f935
output_path = third_party/llvm-build/Release+Asserts

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = openSUSE
depends = common

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = Ubuntu 18.04 (bionic)
depends = debian_buster

View File

@@ -1,3 +1,3 @@
[basebundle]
[bundle]
display_name = Microsoft Windows
depends = common

View File

@@ -7,7 +7,7 @@
#[third_party/syzygy]
#version = bd0e67f571063e18e7200c72e6152a3a7e4c2a6d
#url = https://github.com/Eloston/syzygy/archive/{version}.tar.gz
#download_name = syzygy-{version}.tar.gz
#download_filename = syzygy-{version}.tar.gz
#strip_leading_dirs = syzygy-{version}
# Use a pre-built LLVM toolchain from LLVM for convenience
@@ -23,7 +23,7 @@
[llvm]
version = 6.0.0
url = http://releases.llvm.org/%(version)s/LLVM-%(version)s-win64.exe
download_name = LLVM-%(version)s-win64.exe
download_filename = LLVM-%(version)s-win64.exe
sha512 = d61b51582f3011f00a130b7e858e36732bb0253d3d17a31d1de1eb8032bec2887caeeae303d2b38b04f517474ebe416f2c6670abb1049225919ff120e56e91d2
extractor = 7z
output_path = third_party/llvm-build/Release+Asserts
@@ -32,7 +32,7 @@ output_path = third_party/llvm-build/Release+Asserts
[gperf]
version = 3.0.1
url = https://sourceforge.net/projects/gnuwin32/files/gperf/%(version)s/gperf-%(version)s-bin.zip/download
download_name = gperf-%(version)s-bin.zip
download_filename = gperf-%(version)s-bin.zip
sha512 = 3f2d3418304390ecd729b85f65240a9e4d204b218345f82ea466ca3d7467789f43d0d2129fcffc18eaad3513f49963e79775b10cc223979540fa2e502fe7d4d9
md5 = f67a2271f68894eeaa1984221d5ef5e5
extractor = 7z
@@ -42,7 +42,7 @@ output_path = third_party/gperf
[bison-bin]
version = 2.4.1
url = https://sourceforge.net/projects/gnuwin32/files/bison/%(version)s/bison-%(version)s-bin.zip/download
download_name = bison-%(version)s-bin.zip
download_filename = bison-%(version)s-bin.zip
md5 = 9d3ccf30fc00ba5e18176c33f45aee0e
sha512 = ea8556c2be1497db96c84d627a63f9a9021423041d81210776836776f1783a91f47ac42d15c46510718d44f14653a2e066834fe3f3dbf901c3cdc98288d0b845
extractor = 7z
@@ -50,7 +50,7 @@ output_path = third_party/bison
[bison-dep]
version = 2.4.1
url = https://sourceforge.net/projects/gnuwin32/files/bison/%(version)s/bison-%(version)s-dep.zip/download
download_name = bison-%(version)s-dep.zip
download_filename = bison-%(version)s-dep.zip
md5 = 6558e5f418483b7c859643686008f475
sha512 = f1ca0737cce547c3e6f9b59202a31b12bbc5a5626b63032b05d7abd9d0f55da68b33ff6015c65ca6c15eecd35c6b1461d19a24a880abcbb4448e09f2fabe2209
extractor = 7z
@@ -58,7 +58,7 @@ output_path = third_party/bison
[bison-lib]
version = 2.4.1
url = https://sourceforge.net/projects/gnuwin32/files/bison/%(version)s/bison-%(version)s-lib.zip/download
download_name = bison-%(version)s-lib.zip
download_filename = bison-%(version)s-lib.zip
md5 = c75406456f8d6584746769b1b4b828d6
sha512 = 7400aa529c6ec412a67de1e96ae5cf43f59694fca69106eec9c6d28d04af30f20b5d4d73bdb5b53052ab848c9fb2925db684be1cf45cbbb910292bf6d1dda091
extractor = 7z
@@ -68,7 +68,7 @@ output_path = third_party/bison
[ninja]
version = 1.8.2
url = https://github.com/ninja-build/ninja/releases/download/v%(version)s/ninja-win.zip
download_name = ninja-win-%(version)s.zip
download_filename = ninja-win-%(version)s.zip
sha512 = 9b9ce248240665fcd6404b989f3b3c27ed9682838225e6dc9b67b551774f251e4ff8a207504f941e7c811e7a8be1945e7bcb94472a335ef15e23a0200a32e6d5
extractor = 7z
output_path = third_party/ninja
@@ -77,7 +77,7 @@ output_path = third_party/ninja
[git]
version = 2.16.3
url = https://github.com/git-for-windows/git/releases/download/v%(version)s.windows.1/PortableGit-%(version)s-64-bit.7z.exe
download_name = PortableGit-%(version)s-64-bit.7z.exe
download_filename = PortableGit-%(version)s-64-bit.7z.exe
sha256 = b8f321d4bb9c350a9b5e58e4330d592410ac6b39df60c5c25ca2020c6e6b273e
extractor = 7z
output_path = third_party/git

View File

@@ -1,46 +0,0 @@
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Invert domain substitution on a specified bundle's patches.
"""
import argparse
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from buildkit import domain_substitution
from buildkit.common import get_logger
from buildkit.config import ConfigBundle
from buildkit.cli import NewBaseBundleAction
sys.path.pop(0)
def main(arg_list=None):
"""CLI entrypoint"""
parser = argparse.ArgumentParser(description=__doc__)
config_group = parser.add_mutually_exclusive_group()
config_group.add_argument(
'-b', '--base-bundle', metavar='NAME', dest='bundle',
action=NewBaseBundleAction,
help=('The base config bundle name to use (located in resources/config_bundles). '
'Mutually exclusive with --user-bundle-path. '))
config_group.add_argument(
'-u', '--user-bundle', metavar='PATH', dest='bundle',
type=lambda x: ConfigBundle(Path(x)),
help=('The path to a user bundle to use. '
'Mutually exclusive with --base-bundle-name. '))
args = parser.parse_args(args=arg_list)
try:
domain_substitution.process_bundle_patches(args.bundle, invert=True)
except ValueError:
get_logger().exception('A regex pair is not invertible')
parser.exit(status=1)
if __name__ == '__main__':
main()

View File

@@ -4,8 +4,7 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Generates updating_patch_order.list in the buildspace for updating patches"""
"""Generates updating_patch_order.list for updating patches"""
import argparse
import sys
@@ -13,21 +12,21 @@ from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from buildkit.common import ENCODING
from buildkit.cli import NewBaseBundleAction
from buildkit.cli import NewBundleAction
sys.path.pop(0)
def main(arg_list=None):
"""CLI entrypoint"""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('base_bundle', action=NewBaseBundleAction,
help='The base bundle to generate a patch order from')
parser.add_argument('--output', metavar='PATH', type=Path,
default='buildspace/updating_patch_order.list',
help='The patch order file to write')
parser.add_argument(
'bundle', action=NewBundleAction, help='The bundle to generate a patch order from')
parser.add_argument('output', type=Path, help='The patch order file to write')
args = parser.parse_args(args=arg_list)
with args.output.open('w', encoding=ENCODING) as file_obj:
file_obj.writelines('%s\n' % x for x in args.base_bundle.patches)
file_obj.writelines('%s\n' % x for x in args.bundle.patch_order)
if __name__ == "__main__":
main()

View File

@@ -3,7 +3,6 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Run Pylint over buildkit"""
import argparse
@@ -14,18 +13,18 @@ sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
import pylint_devutils
sys.path.pop(0)
def main():
"""CLI entrypoint"""
parser = argparse.ArgumentParser(description='Run Pylint over buildkit')
parser.add_argument('--hide-fixme', action='store_true', help='Hide "fixme" Pylint warnings.')
parser.add_argument(
'--hide-fixme', action='store_true',
help='Hide "fixme" Pylint warnings.')
parser.add_argument(
'--show-locally-disabled', action='store_true',
'--show-locally-disabled',
action='store_true',
help='Show "locally-disabled" Pylint warnings.')
args = parser.parse_args()
disable = list()
disable = ['bad-continuation']
if args.hide_fixme:
disable.append('fixme')
@@ -46,5 +45,6 @@ def main():
exit(1)
exit(0)
if __name__ == '__main__':
main()

View File

@@ -3,7 +3,6 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Run Pylint over any module"""
import argparse
@@ -13,6 +12,7 @@ from pathlib import Path
from pylint import epylint as lint
def run_pylint(modulepath, pylint_options):
"""Runs Pylint. Returns a boolean indicating success"""
pylint_stats = Path('/run/user/{}/pylint_stats'.format(os.getuid()))
@@ -34,19 +34,17 @@ def run_pylint(modulepath, pylint_options):
return False
return True
def main():
"""CLI entrypoint"""
parser = argparse.ArgumentParser(description='Run Pylint over an arbitrary module')
parser.add_argument('--hide-fixme', action='store_true', help='Hide "fixme" Pylint warnings.')
parser.add_argument(
'--hide-fixme', action='store_true',
help='Hide "fixme" Pylint warnings.')
parser.add_argument(
'--show-locally-disabled', action='store_true',
'--show-locally-disabled',
action='store_true',
help='Show "locally-disabled" Pylint warnings.')
parser.add_argument(
'modulepath', type=Path,
help='Path to the module to check')
parser.add_argument('modulepath', type=Path, help='Path to the module to check')
args = parser.parse_args()
if not args.modulepath.exists():
@@ -55,6 +53,7 @@ def main():
disables = [
'wrong-import-position',
'bad-continuation',
]
if args.hide_fixme:
@@ -71,5 +70,6 @@ def main():
exit(1)
exit(0)
if __name__ == '__main__':
main()

View File

@@ -10,13 +10,8 @@ alias quilt='quilt --quiltrc -'
# Assume this script lives within the repository
REPO_ROOT=$(dirname $(dirname $(readlink -f ${BASH_SOURCE[0]})))
export QUILT_PATCHES="$REPO_ROOT/resources/patches"
export QUILT_SERIES=$(readlink -f "$REPO_ROOT/buildspace/updating_patch_order.list")
if [ -z "$QUILT_SERIES" ]; then
printf '%s\n' 'ERROR: QUILT_SERIES file not found.' >&2
return
fi
export QUILT_PATCHES="$REPO_ROOT/patches"
export QUILT_SERIES=$(readlink -f "$REPO_ROOT/build/updating_patch_order.list")
# Options below borrowed from Debian and default quilt options (from /etc/quilt.quiltrc on Debian)
export QUILT_PUSH_ARGS="--color=auto"

View File

@@ -3,12 +3,11 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Update binary pruning and domain substitution lists automatically.
It will download and unpack into the buildspace tree as necessary.
No binary pruning or domain substitution will be applied to the buildspace tree after
It will download and unpack into the source tree as necessary.
No binary pruning or domain substitution will be applied to the source tree after
the process has finished.
"""
@@ -18,18 +17,15 @@ import argparse
from pathlib import Path, PurePosixPath
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from buildkit.cli import get_basebundle_verbosely
from buildkit.common import (
BUILDSPACE_DOWNLOADS, BUILDSPACE_TREE, ENCODING, BuildkitAbort, get_logger, dir_empty)
from buildkit.cli import NewBundleAction
from buildkit.common import ENCODING, BuildkitAbort, get_logger, dir_empty
from buildkit.domain_substitution import TREE_ENCODINGS
from buildkit import source_retrieval
from buildkit import downloads
sys.path.pop(0)
# NOTE: Include patterns have precedence over exclude patterns
# pathlib.Path.match() paths to include in binary pruning
PRUNING_INCLUDE_PATTERNS = [
'components/domain_reliability/baked_in_configs/*'
]
PRUNING_INCLUDE_PATTERNS = ['components/domain_reliability/baked_in_configs/*']
# pathlib.Path.match() paths to exclude from binary pruning
PRUNING_EXCLUDE_PATTERNS = [
@@ -72,43 +68,19 @@ PRUNING_EXCLUDE_PATTERNS = [
# NOTE: Domain substitution path prefix exclusion has precedence over inclusion patterns
# Paths to exclude by prefixes of the POSIX representation for domain substitution
DOMAIN_EXCLUDE_PREFIXES = [
'components/test/',
'net/http/transport_security_state_static.json'
]
DOMAIN_EXCLUDE_PREFIXES = ['components/test/', 'net/http/transport_security_state_static.json']
# pathlib.Path.match() patterns to include in domain substitution
DOMAIN_INCLUDE_PATTERNS = [
'*.h',
'*.hh',
'*.hpp',
'*.hxx',
'*.cc',
'*.cpp',
'*.cxx',
'*.c',
'*.h',
'*.json',
'*.js',
'*.html',
'*.htm',
'*.css',
'*.py*',
'*.grd',
'*.sql',
'*.idl',
'*.mk',
'*.gyp*',
'makefile',
'*.txt',
'*.xml',
'*.mm',
'*.jinja*'
'*.h', '*.hh', '*.hpp', '*.hxx', '*.cc', '*.cpp', '*.cxx', '*.c', '*.h', '*.json', '*.js',
'*.html', '*.htm', '*.css', '*.py*', '*.grd', '*.sql', '*.idl', '*.mk', '*.gyp*', 'makefile',
'*.txt', '*.xml', '*.mm', '*.jinja*'
]
# Binary-detection constant
_TEXTCHARS = bytearray({7, 8, 9, 10, 12, 13, 27} | set(range(0x20, 0x100)) - {0x7f})
def _is_binary(bytes_data):
"""
Returns True if the data seems to be binary data (i.e. not human readable); False otherwise
@@ -116,12 +88,13 @@ def _is_binary(bytes_data):
# From: https://stackoverflow.com/a/7392391
return bool(bytes_data.translate(None, _TEXTCHARS))
def should_prune(path, relative_path):
"""
Returns True if a path should be pruned from the buildspace tree; False otherwise
Returns True if a path should be pruned from the source tree; False otherwise
path is the pathlib.Path to the file from the current working directory.
relative_path is the pathlib.Path to the file from the buildspace tree
relative_path is the pathlib.Path to the file from the source tree
"""
# Match against include patterns
for pattern in PRUNING_INCLUDE_PATTERNS:
@@ -141,6 +114,7 @@ def should_prune(path, relative_path):
# Passed all filtering; do not prune
return False
def _check_regex_match(file_path, search_regex):
"""
Returns True if a regex pattern matches a file; False otherwise
@@ -161,12 +135,13 @@ def _check_regex_match(file_path, search_regex):
return True
return False
def should_domain_substitute(path, relative_path, search_regex):
"""
Returns True if a path should be domain substituted in the buildspace tree; False otherwise
Returns True if a path should be domain substituted in the source tree; False otherwise
path is the pathlib.Path to the file from the current working directory.
relative_path is the pathlib.Path to the file from the buildspace tree.
relative_path is the pathlib.Path to the file from the source tree.
search_regex is a compiled regex object to search for domain names
"""
relative_path_posix = relative_path.as_posix().lower()
@@ -178,30 +153,31 @@ def should_domain_substitute(path, relative_path, search_regex):
return _check_regex_match(path, search_regex)
return False
def compute_lists(buildspace_tree, search_regex):
def compute_lists(source_tree, search_regex):
"""
Compute the binary pruning and domain substitution lists of the buildspace tree.
Compute the binary pruning and domain substitution lists of the source tree.
Returns a tuple of two items in the following order:
1. The sorted binary pruning list
2. The sorted domain substitution list
buildspace_tree is a pathlib.Path to the buildspace tree
source_tree is a pathlib.Path to the source tree
search_regex is a compiled regex object to search for domain names
"""
pruning_set = set()
domain_substitution_set = set()
deferred_symlinks = dict() # POSIX resolved path -> set of POSIX symlink paths
buildspace_tree = buildspace_tree.resolve()
for path in buildspace_tree.rglob('*'):
source_tree = source_tree.resolve()
for path in source_tree.rglob('*'):
if not path.is_file():
# NOTE: Path.rglob() does not traverse symlink dirs; no need for special handling
continue
relative_path = path.relative_to(buildspace_tree)
relative_path = path.relative_to(source_tree)
if path.is_symlink():
try:
resolved_relative_posix = path.resolve().relative_to(buildspace_tree).as_posix()
resolved_relative_posix = path.resolve().relative_to(source_tree).as_posix()
except ValueError:
# Symlink leads out of the buildspace tree
# Symlink leads out of the source tree
continue
if resolved_relative_posix in pruning_set:
pruning_set.add(relative_path.as_posix())
@@ -229,48 +205,67 @@ def compute_lists(buildspace_tree, search_regex):
raise BuildkitAbort()
return sorted(pruning_set), sorted(domain_substitution_set)
def main(args_list=None):
"""CLI entrypoint"""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
'-a', '--auto-download', action='store_true',
'-a',
'--auto-download',
action='store_true',
help='If specified, it will download the source code and dependencies '
'for the --base-bundle given. Otherwise, only an existing '
'buildspace tree will be used.')
'for the --bundle given. Otherwise, only an existing '
'source tree will be used.')
parser.add_argument(
'-b', '--base-bundle', metavar='NAME', type=get_basebundle_verbosely,
default='common', help='The base bundle to use. Default: %(default)s')
'-b',
'--bundle',
metavar='PATH',
action=NewBundleAction,
default='config_bundles/common',
help='The bundle to use. Default: %(default)s')
parser.add_argument(
'-p', '--pruning', metavar='PATH', type=Path,
default='resources/config_bundles/common/pruning.list',
'--pruning',
metavar='PATH',
type=Path,
default='config_bundles/common/pruning.list',
help='The path to store pruning.list. Default: %(default)s')
parser.add_argument(
'-d', '--domain-substitution', metavar='PATH', type=Path,
default='resources/config_bundles/common/domain_substitution.list',
'--domain-substitution',
metavar='PATH',
type=Path,
default='config_bundles/common/domain_substitution.list',
help='The path to store domain_substitution.list. Default: %(default)s')
parser.add_argument(
'--tree', metavar='PATH', type=Path, default=BUILDSPACE_TREE,
help=('The path to the buildspace tree to create. '
'If it is not empty, the source will not be unpacked. '
'Default: %(default)s'))
'-t',
'--tree',
metavar='PATH',
type=Path,
required=True,
help=('The path to the source tree to create. '
'If it is not empty, the source will not be unpacked.'))
parser.add_argument(
'--downloads', metavar='PATH', type=Path, default=BUILDSPACE_DOWNLOADS,
help=('The path to the buildspace downloads directory. '
'It must already exist. Default: %(default)s'))
'-c',
'--cache',
metavar='PATH',
type=Path,
required=True,
help=('The path to the downloads cache. '
'It must already exist.'))
try:
args = parser.parse_args(args_list)
if args.tree.exists() and not dir_empty(args.tree):
get_logger().info('Using existing buildspace tree at %s', args.tree)
get_logger().info('Using existing source tree at %s', args.tree)
elif args.auto_download:
source_retrieval.retrieve_and_extract(
args.base_bundle, args.downloads, args.tree, prune_binaries=False)
downloads.retrieve_downloads(args.bundle, args.cache, True)
downloads.check_downloads(args.bundle, args.cache)
downloads.unpack_downloads(args.bundle, args.cache, args.tree)
else:
get_logger().error('No buildspace tree found and --auto-download '
get_logger().error('No source tree found and --auto-download '
'is not specified. Aborting.')
raise BuildkitAbort()
get_logger().info('Computing lists...')
pruning_list, domain_substitution_list = compute_lists(
args.tree, args.base_bundle.domain_regex.search_regex)
args.tree, args.bundle.domain_regex.search_regex)
except BuildkitAbort:
exit(1)
with args.pruning.open('w', encoding=ENCODING) as file_obj:
@@ -278,5 +273,6 @@ def main(args_list=None):
with args.domain_substitution.open('w', encoding=ENCODING) as file_obj:
file_obj.writelines('%s\n' % line for line in domain_substitution_list)
if __name__ == "__main__":
main()

257
devutils/update_patches.py Executable file
View File

@@ -0,0 +1,257 @@
#!/usr/bin/env python3
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Refreshes patches of all configs via quilt until the first patch that
requires manual modification
"""
import argparse
import os
import subprocess
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from buildkit.common import get_logger
from buildkit.config import ConfigBundle
sys.path.pop(0)
_CONFIG_BUNDLES_PATH = Path(__file__).parent.parent / 'config_bundles'
_PATCHES_PATH = Path(__file__).parent.parent / 'patches'
_LOGGER = get_logger(prepend_timestamp=False, log_init=False)
def _get_run_quilt(source_dir, series_path, patches_dir):
"""Create a function to run quilt with proper settings"""
def _run_quilt(*args, log_stderr=True, **kwargs):
result = subprocess.run(
('quilt', '--quiltrc', '-', *args),
universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=str(source_dir),
env={
'QUILT_PATCHES': str(patches_dir.resolve()),
'QUILT_SERIES': str(series_path.resolve()),
'QUILT_PUSH_ARGS': '--color=auto',
'QUILT_DIFF_OPTS': '--show-c-function',
'QUILT_PATCH_OPTS': '--unified --reject-format=unified',
'QUILT_DIFF_ARGS': '-p ab --no-timestamps --no-index --color=auto',
'QUILT_REFRESH_ARGS': '-p ab --no-timestamps --no-index',
'QUILT_COLORS': ('diff_hdr=1;32:diff_add=1;34:diff_rem=1;31:'
'diff_hunk=1;33:diff_ctx=35:diff_cctx=33'),
'QUILT_SERIES_ARGS': '--color=auto',
'QUILT_PATCHES_ARGS': '--color=auto',
},
**kwargs)
if log_stderr and result.stderr:
_LOGGER.warning('Got stderr with quilt args %s: %s', args, result.stderr.rstrip('\n'))
return result
return _run_quilt
def _generate_full_bundle_depends(bundle_path, bundle_cache, unexplored_bundles):
"""
Generates the bundle's and dependencies' dependencies ordered by the deepest dependency first
"""
for dependency_name in reversed(bundle_cache[bundle_path].bundlemeta.depends):
dependency_path = bundle_path.with_name(dependency_name)
if dependency_path in unexplored_bundles:
# Remove the bundle from being explored in _get_patch_trie()
# Since this bundle is a dependency of something else, it must be checked first
# before the dependent
unexplored_bundles.remove(dependency_path)
# First, get all dependencies of the current dependency in order
yield from _generate_full_bundle_depends(dependency_path, bundle_cache, unexplored_bundles)
# Then, add the dependency itself
yield dependency_path
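The deepest-first traversal can be demonstrated on a toy dependency map; the bundle names below are made up for illustration:

```python
# Toy sketch mirroring _generate_full_bundle_depends; names are hypothetical.
depends = {'desktop': ['common'], 'common': ['base'], 'base': []}

def full_depends(name, unexplored):
    """Yield all transitive dependencies of name, deepest dependency first."""
    for dep in reversed(depends[name]):
        # Prune: a bundle that is someone's dependency need not be explored on its own
        unexplored.discard(dep)
        yield from full_depends(dep, unexplored)
        yield dep

unexplored = set(depends)
order = list(full_depends('desktop', unexplored))
print(order)  # deepest dependency first
```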
def _get_patch_trie(bundle_cache):
"""
Returns a trie of config bundles and their dependencies. It is a dict of the following format:
key: pathlib.Path of config bundle
value: dict of direct dependents of said bundle, in the same format as the surrounding dict.
"""
# Returned trie
patch_trie = dict()
# Set of bundles that are not children of the root node (i.e. not the lowest dependency)
# It is assumed that any bundle that is not used as a lowest dependency will never
# be used as a lowest dependency. This is the case for mixin bundles.
non_root_children = set()
# All bundles that haven't been added to the trie, either as a dependency or
# in this function explicitly
unexplored_bundles = set(bundle_cache.keys())
# Construct patch_trie
while unexplored_bundles:
current_path = unexplored_bundles.pop()
current_trie_node = patch_trie # The root node of the trie
# Construct a branch in the patch trie up to the closest dependency
# by using the desired traversal to the config bundle.
# This is essentially a depth-first tree construction algorithm
for dependency_path in _generate_full_bundle_depends(current_path, bundle_cache,
unexplored_bundles):
if current_trie_node != patch_trie:
non_root_children.add(dependency_path)
if dependency_path not in current_trie_node:
current_trie_node[dependency_path] = dict()
# Walk to the child node
current_trie_node = current_trie_node[dependency_path]
# Finally, add the dependency itself as a leaf node of the trie
# If the assertion fails, the algorithm is broken
assert current_path not in current_trie_node
current_trie_node[current_path] = dict()
# Remove non-root node children
for non_root_child in non_root_children.intersection(patch_trie.keys()):
del patch_trie[non_root_child]
# Potential optimization: Check if leaves patch the same files as their parents.
# (i.e. if the set of files patched by the bundle is disjoint from that of the parent bundle)
# If not, move them up to their grandparent, rescan the tree leaves, and repeat
# Then, group leaves and their parents and see if the set of files patched is disjoint from
# that of the grandparents. Repeat this with great-grandparents and increasingly larger
# groupings until all groupings end up including the top-level nodes.
# This optimization saves memory by not needing to store all the patched files of
# a long branch at once.
# However, since the trie for the current structure is quite flat and all bundles are
# quite small (except common, which is by far the largest), this isn't necessary for now.
return patch_trie
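Condensed onto a toy dependency map (bundle names hypothetical), the whole trie construction looks like this; the real function operates on `pathlib.Path` keys and `bundlemeta.depends`:

```python
# Minimal sketch of the patch-trie construction in _get_patch_trie.
depends = {'base': [], 'common': ['base'], 'desktop': ['common']}

def _full_depends(name, unexplored):
    # Deepest dependency first, pruning bundles that will be visited anyway
    for dep in reversed(depends[name]):
        unexplored.discard(dep)
        yield from _full_depends(dep, unexplored)
        yield dep

def get_patch_trie():
    patch_trie = {}
    non_root_children = set()
    unexplored = set(depends)
    while unexplored:
        current = unexplored.pop()
        node = patch_trie
        for dep in _full_depends(current, unexplored):
            if node is not patch_trie:
                non_root_children.add(dep)
            node = node.setdefault(dep, {})
        node[current] = {}  # the bundle itself becomes a leaf
    # Drop bundles that only ever appear as non-root children (e.g. mixins)
    for child in non_root_children & patch_trie.keys():
        del patch_trie[child]
    return patch_trie

trie = get_patch_trie()
print(trie)
```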
def _pop_to_last_bundle(run_quilt, patch_order_stack):
"""Helper for _refresh_patches"""
if patch_order_stack:
try:
from_top = filter(len, reversed(patch_order_stack))
# The previous bundle is the second from the top
# of the stack with patches
next(from_top)
pop_to = next(from_top)[-1]
except StopIteration:
run_quilt('pop', '-a', check=True)
else:
if run_quilt('top', check=True).stdout.strip() != pop_to:
# Pop only if the top stack entry had patches.
# A patch can only be applied once in any given branch, so we use
# a comparison of patch names to tell if anything needs to be done.
run_quilt('pop', pop_to, check=True)
def _refresh_patches(patch_trie, bundle_cache, series_path, run_quilt, abort_on_failure):
"""
Refreshes the patches with DFS using GNU Quilt in the trie of config bundles
Returns a boolean indicating if any of the patches have failed
"""
# Stack of iterables over each node's children
# First, insert iterable over root node's children
node_iter_stack = [iter(patch_trie.items())]
# Stack of patch orders to use in generation of quilt series files
# It is initialized to an empty value to be popped by the first bundle in
# node_iter_stack
patch_order_stack = [tuple()]
# Whether any branch had failed validation
had_failure = False
while node_iter_stack:
try:
child_path, grandchildren = next(node_iter_stack[-1])
except StopIteration:
# Finished exploring all children of this node
patch_order_stack.pop()
node_iter_stack.pop()
_pop_to_last_bundle(run_quilt, patch_order_stack)
continue
# Apply children's patches
_LOGGER.info('Verifying at depth %s: %s', len(node_iter_stack), child_path.name)
child_patch_order = tuple()
assert child_path in bundle_cache
try:
child_patch_order = tuple(bundle_cache[child_path].patch_order)
except KeyError:
# No patches in the bundle
pass
patch_order_stack.pop()
patch_order_stack.append(child_patch_order)
branch_validation_failed = False
if patch_order_stack[-1]:
series_path.write_text('\n'.join(map('\n'.join, patch_order_stack)))
for patch_path_str in child_patch_order:
result = run_quilt('push', patch_path_str)
if result.returncode:
_LOGGER.error('Got exit status %s while refreshing %s', result.returncode,
patch_path_str)
if result.stdout:
_LOGGER.error('stdout: %s', result.stdout.rstrip('\n'))
branch_validation_failed = True
had_failure = True
break
result = run_quilt('refresh', check=True)
if branch_validation_failed:
if abort_on_failure:
return had_failure
_pop_to_last_bundle(run_quilt, patch_order_stack)
else: # Patches applied successfully
# Create a placeholder for the child bundle to place a patch order
patch_order_stack.append(tuple())
# Explore this child's children
node_iter_stack.append(iter(grandchildren.items()))
return had_failure
def main():
"""CLI Entrypoint"""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
'-s',
'--source-dir',
type=Path,
required=True,
metavar='DIRECTORY',
help='Path to the source tree')
parser.add_argument(
'-a',
'--abort-on-failure',
action='store_true',
help=('If specified, abort on the first patch that fails to refresh. '
'This allows one to refresh the rest of the patches in the series.'))
args = parser.parse_args()
if not args.source_dir.exists():
parser.error('Cannot find source tree at: {}'.format(args.source_dir))
patches_dir = Path(os.environ.get('QUILT_PATCHES', 'patches'))
if not patches_dir.exists():
parser.error('Cannot find patches directory at: {}'.format(patches_dir))
series_path = Path(os.environ.get('QUILT_SERIES', 'series'))
if not series_path.exists() and not (patches_dir / series_path).exists(): #pylint: disable=no-member
parser.error('Cannot find series file at "{}" or "{}"'.format(series_path,
patches_dir / series_path))
# Path to bundle -> ConfigBundle without dependencies
bundle_cache = dict(
map(lambda x: (x, ConfigBundle(x, load_depends=False)), _CONFIG_BUNDLES_PATH.iterdir()))
patch_trie = _get_patch_trie(bundle_cache)
run_quilt = _get_run_quilt(args.source_dir, series_path, patches_dir)
# Remove currently applied patches
if series_path.exists():
if run_quilt('top').returncode != 2:
_LOGGER.info('Popping applied patches')
run_quilt('pop', '-a', check=True)
had_failure = _refresh_patches(patch_trie, bundle_cache, series_path, run_quilt,
args.abort_on_failure)
if had_failure:
_LOGGER.error('Error(s) occurred while refreshing. See output above.')
parser.exit(status=1)
_LOGGER.info('Successfully refreshed all patches.')
if __name__ == '__main__':
main()

@@ -4,17 +4,16 @@
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Run sanity checking algorithms over the base bundles and patches.
"""Run sanity checking algorithms over the bundles and patches.
It checks the following:
* All patches exist
* All patches are referenced by at least one patch order
* Each patch is used only once in all base bundles
* Whether patch order entries can be consolidated across base bundles
* Each patch is used only once in all bundles
* Whether patch order entries can be consolidated across bundles
* GN flags with the same key and value are not duplicated in inheritance
* Whether GN flags can be consolidated across base bundles
* Whether GN flags can be consolidated across bundles
Exit codes:
* 0 if there are no problems
@@ -27,19 +26,16 @@ import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from buildkit.common import (
CONFIG_BUNDLES_DIR, ENCODING, PATCHES_DIR, BuildkitAbort, get_logger,
get_resources_dir)
from buildkit.config import BASEBUNDLEMETA_INI, BaseBundleMetaIni, ConfigBundle
from buildkit.common import ENCODING, BuildkitAbort, get_logger
from buildkit.config import ConfigBundle
from buildkit.patches import patch_paths_by_bundle
from buildkit.third_party import unidiff
sys.path.pop(0)
BaseBundleResult = collections.namedtuple(
'BaseBundleResult',
('leaves', 'gn_flags', 'patches'))
BundleResult = collections.namedtuple('BundleResult', ('leaves', 'gn_flags', 'patches'))
ExplorationJournal = collections.namedtuple(
'ExplorationJournal',
('unexplored_set', 'results', 'dependents', 'unused_patches'))
'ExplorationJournal', ('unexplored_set', 'results', 'dependents', 'unused_patches'))
def _check_patches(bundle, logger):
"""
@@ -49,7 +45,12 @@ def _check_patches(bundle, logger):
Raises BuildkitAbort if fatal errors occurred.
"""
warnings = False
for patch_path in bundle.patches.patch_iter():
try:
bundle.patch_order
except KeyError:
# Bundle has no patch order
return warnings
for patch_path in patch_paths_by_bundle(bundle):
if patch_path.exists():
with patch_path.open(encoding=ENCODING) as file_obj:
try:
@@ -63,7 +64,8 @@ def _check_patches(bundle, logger):
warnings = False
return warnings
def _merge_disjoints(pair_iterable, current_name, logger):
def _merge_disjoints(pair_iterable, current_path, logger):
"""
Merges disjoint sets with errors
pair_iterable is an iterable of tuples (display_name, current_set, dependency_set, as_error)
@@ -80,23 +82,28 @@ def _merge_disjoints(pair_iterable, current_name, logger):
log_func = logger.error
else:
log_func = logger.warning
log_func('%s of "%s" appear at least twice: %s', display_name, current_name,
log_func('%s of "%s" appear at least twice: %s', display_name, current_path,
current_set.intersection(dependency_set))
if as_error:
raise BuildkitAbort()
warnings = True
return warnings
def _populate_set_with_gn_flags(new_set, base_bundle, logger):
def _populate_set_with_gn_flags(new_set, bundle, logger):
"""
Adds items into set new_set from the base bundle's GN flags
Adds items into set new_set from the bundle's GN flags
Entries that are not sorted are logged as warnings.
Returns True if warnings were logged; False otherwise
"""
warnings = False
try:
iterator = iter(base_bundle.gn_flags)
iterator = iter(bundle.gn_flags)
except KeyError:
# No GN flags found
return warnings
except ValueError as exc:
# Invalid GN flags format
logger.error(str(exc))
raise BuildkitAbort()
try:
@@ -105,158 +112,154 @@ def _populate_set_with_gn_flags(new_set, base_bundle, logger):
return warnings
for current in iterator:
if current < previous:
logger.warning(
'In base bundle "%s" GN flags: "%s" should be sorted before "%s"',
base_bundle.name, current, previous)
logger.warning('In bundle "%s" GN flags: "%s" should be sorted before "%s"',
bundle.name, current, previous)
warnings = True
new_set.add('%s=%s' % (current, base_bundle.gn_flags[current]))
new_set.add('%s=%s' % (current, bundle.gn_flags[current]))
previous = current
return warnings
def _populate_set_with_patches(new_set, unused_patches, base_bundle, logger):
def _populate_set_with_patches(new_set, unused_patches, bundle, logger):
"""
Adds entries to set new_set from the base bundle's patch_order if they are unique.
Adds entries to set new_set from the bundle's patch_order if they are unique.
Entries that are not unique are logged as warnings.
Returns True if warnings were logged; False otherwise
"""
warnings = False
for current in base_bundle.patches:
try:
bundle.patch_order
except KeyError:
# Bundle has no patch order
return warnings
for current in bundle.patch_order:
if current in new_set:
logger.warning(
'In base bundle "%s" patch_order: "%s" already appeared once',
base_bundle.name, current)
logger.warning('In bundle "%s" patch_order: "%s" already appeared once',
bundle.bundlemeta.display_name, current)
warnings = True
else:
unused_patches.discard(current)
new_set.add(current)
return warnings
def _explore_base_bundle(current_name, journal, logger):
def _explore_bundle(current_path, journal, logger):
"""
Explore the base bundle given by current_name. Modifies journal
Explore the bundle given by current_path. Modifies journal
Returns True if warnings occurred, False otherwise.
Raises BuildkitAbort if fatal errors occurred.
"""
warnings = False
if current_name in journal.results:
if current_path in journal.results:
# Node has been explored iff its results are stored
return warnings
# Indicate start of node exploration
try:
journal.unexplored_set.remove(current_name)
journal.unexplored_set.remove(current_path)
except KeyError:
# Exploration has begun but there are no results, so it still must be processing
# its dependencies
logger.error('Dependencies of "%s" are cyclical', current_name)
logger.error('Dependencies of "%s" are cyclical', current_path)
raise BuildkitAbort()
current_base_bundle = ConfigBundle.from_base_name(current_name, load_depends=False)
current_meta = BaseBundleMetaIni(current_base_bundle.path / BASEBUNDLEMETA_INI)
current_bundle = ConfigBundle(current_path, load_depends=False)
# Populate current base bundle's data
current_results = BaseBundleResult(
leaves=set(),
gn_flags=set(),
patches=set())
warnings = _populate_set_with_gn_flags(
current_results.gn_flags, current_base_bundle, logger) or warnings
warnings = _populate_set_with_patches(
current_results.patches, journal.unused_patches, current_base_bundle, logger) or warnings
warnings = _check_patches(
current_base_bundle, logger) or warnings
# Populate current bundle's data
current_results = BundleResult(leaves=set(), gn_flags=set(), patches=set())
warnings = _populate_set_with_gn_flags(current_results.gn_flags, current_bundle,
logger) or warnings
warnings = _populate_set_with_patches(current_results.patches, journal.unused_patches,
current_bundle, logger) or warnings
warnings = _check_patches(current_bundle, logger) or warnings
# Set an empty set just in case this node has no dependents
if current_name not in journal.dependents:
journal.dependents[current_name] = set()
if current_path not in journal.dependents:
journal.dependents[current_path] = set()
for dependency_name in current_meta.depends:
for dependency_path in map(current_path.with_name, current_bundle.bundlemeta.depends):
# Update dependents
if dependency_name not in journal.dependents:
journal.dependents[dependency_name] = set()
journal.dependents[dependency_name].add(current_name)
if dependency_path not in journal.dependents:
journal.dependents[dependency_path] = set()
journal.dependents[dependency_path].add(current_path)
# Explore dependencies
warnings = _explore_base_bundle(dependency_name, journal, logger) or warnings
warnings = _explore_bundle(dependency_path, journal, logger) or warnings
# Merge sets of dependencies with the current
warnings = _merge_disjoints((
('Patches', current_results.patches,
journal.results[dependency_name].patches, False),
('GN flags', current_results.gn_flags,
journal.results[dependency_name].gn_flags, False),
('Dependencies', current_results.leaves,
journal.results[dependency_name].leaves, True),
), current_name, logger) or warnings
('Patches', current_results.patches, journal.results[dependency_path].patches, False),
('GN flags', current_results.gn_flags, journal.results[dependency_path].gn_flags,
False),
('Dependencies', current_results.leaves, journal.results[dependency_path].leaves, True),
), current_path, logger) or warnings
if not current_results.leaves:
# This node is a leaf node
current_results.leaves.add(current_name)
current_results.leaves.add(current_path)
# Store results last to indicate it has been successfully explored
journal.results[current_name] = current_results
journal.results[current_path] = current_results
return warnings
def _check_mergability(info_tuple_list, dependents, logger):
"""
Checks if entries of config files from dependents can be combined into a common dependency
info_tuple_list is a list of tuples (display_name, set_getter)
set_getter is a function that returns the set of dependents for the given base bundle name
set_getter is a function that returns the set of dependents for the given bundle path
"""
warnings = False
set_dict = dict() # display name -> set
for dependency_name in dependents:
for dependency_path in dependents:
# Initialize sets
for display_name, _ in info_tuple_list:
set_dict[display_name] = set()
for dependent_name in dependents[dependency_name]:
for dependent_path in dependents[dependency_path]:
# Keep only common entries between the current dependent and
# other processed dependents for the current dependency
for display_name, set_getter in info_tuple_list:
set_dict[display_name].intersection_update(
set_getter(dependent_name))
set_dict[display_name].intersection_update(set_getter(dependent_path))
# Check if there are any common entries in all dependents for the
# given dependency
for display_name, common_set in set_dict.items():
if common_set:
logger.warning(
'Base bundles %s can combine %s into "%s": %s',
dependents[dependency_name], display_name, dependency_name,
common_set)
logger.warning('Bundles %s can combine %s into "%s": %s',
dependents[dependency_path], display_name, dependency_path,
common_set)
warnings = True
return warnings
def main():
"""CLI entrypoint"""
logger = get_logger(prepend_timestamp=False, log_init=False)
warnings = False
patches_dir = get_resources_dir() / PATCHES_DIR
config_bundles_dir = get_resources_dir() / CONFIG_BUNDLES_DIR
root_dir = Path(__file__).parent.parent
patches_dir = root_dir / 'patches'
config_bundles_dir = root_dir / 'config_bundles'
journal = ExplorationJournal(
# base bundles not explored yet
unexplored_set=set(map(
lambda x: x.name,
config_bundles_dir.iterdir())),
# base bundle name -> namedtuple(leaves=set(), gn_flags=set())
# bundle paths not explored yet
unexplored_set=set(config_bundles_dir.iterdir()),
# bundle path -> namedtuple(leaves=set(), gn_flags=set())
results=dict(),
# dependency -> set of dependents
# dependency -> set of dependent paths
dependents=dict(),
# patches unused by patch orders
unused_patches=set(map(
lambda x: str(x.relative_to(patches_dir)),
filter(lambda x: not x.is_dir(), patches_dir.rglob('*'))))
)
unused_patches=set(
map(lambda x: str(x.relative_to(patches_dir)),
filter(lambda x: not x.is_dir(), patches_dir.rglob('*')))))
try:
# Explore and validate base bundles
# Explore and validate bundles
while journal.unexplored_set:
warnings = _explore_base_bundle(
next(iter(journal.unexplored_set)), journal, logger) or warnings
warnings = _explore_bundle(next(iter(journal.unexplored_set)), journal,
logger) or warnings
# Check for config file entries that should be merged into dependencies
warnings = _check_mergability((
('GN flags', lambda x: journal.results[x].gn_flags),
@@ -272,6 +275,7 @@ def main():
exit(1)
exit(0)
if __name__ == '__main__':
if sys.argv[1:]:
print(__doc__)

devutils/validate_patches.py Executable file
@@ -0,0 +1,539 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Validates that all patches apply cleanly against the source tree.
The required source tree files can be retrieved from Google directly.
"""
import argparse
import ast
import base64
import collections
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
from buildkit.common import ENCODING, get_logger, get_chromium_version
from buildkit.config import ConfigBundle
from buildkit.third_party import unidiff
from buildkit.patches import DEFAULT_PATCH_DIR
sys.path.pop(0)
_CONFIG_BUNDLES_PATH = Path(__file__).parent.parent / 'config_bundles'
_PATCHES_PATH = Path(__file__).parent.parent / 'patches'
class _PatchValidationError(Exception):
"""Raised when patch validation fails"""
class _UnexpectedSyntaxError(RuntimeError):
"""Raised when unexpected syntax is used in DEPS"""
class _DepsNodeVisitor(ast.NodeVisitor):
_valid_syntax_types = (ast.mod, ast.expr_context, ast.boolop, ast.Assign, ast.Add, ast.Name,
ast.Dict, ast.Str, ast.NameConstant, ast.List, ast.BinOp)
_allowed_callables = ('Var', )
def visit_Call(self, node): #pylint: disable=invalid-name
"""Override Call syntax handling"""
if node.func.id not in self._allowed_callables:
raise _UnexpectedSyntaxError('Unexpected call of "%s" at line %s, column %s' %
(node.func.id, node.lineno, node.col_offset))
def generic_visit(self, node):
for ast_type in self._valid_syntax_types:
if isinstance(node, ast_type):
super().generic_visit(node)
return
raise _UnexpectedSyntaxError('Unexpected {} at line {}, column {}'.format(
type(node).__name__, node.lineno, node.col_offset))
def _validate_deps(deps_text):
"""Returns True if the DEPS file passes validation; False otherwise"""
try:
_DepsNodeVisitor().visit(ast.parse(deps_text))
except _UnexpectedSyntaxError as exc:
print('ERROR: %s' % exc)
return False
return True
def _deps_var(deps_globals):
"""Return a function that implements DEPS's Var() function"""
def _var_impl(var_name):
"""Implementation of Var() in DEPS"""
return deps_globals['vars'][var_name]
return _var_impl
def _parse_deps(deps_text):
"""Returns a dict of parsed DEPS data"""
deps_globals = {'__builtins__': None}
deps_globals['Var'] = _deps_var(deps_globals)
exec(deps_text, deps_globals) #pylint: disable=exec-used
return deps_globals
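`_parse_deps` relies on `exec()` with a stripped-down global namespace so DEPS files cannot reach builtins; a minimal sketch with a made-up DEPS snippet:

```python
# Evaluate a DEPS-style file in a restricted namespace (sample text is fabricated).
deps_text = '''
vars = {'chromium_git': 'https://chromium.googlesource.com'}
deps = {
    'src/third_party/foo': Var('chromium_git') + '/foo.git' + '@' + 'abc123',
}
'''

deps_globals = {'__builtins__': None}
# Var() resolves names against the vars dict defined earlier in the same file
deps_globals['Var'] = lambda name: deps_globals['vars'][name]
exec(deps_text, deps_globals)
print(deps_globals['deps']['src/third_party/foo'])
```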
def _download_googlesource_file(download_session, repo_url, version, relative_path):
"""
Returns the contents of the text file with path within the given
googlesource.com repo as a string.
"""
if 'googlesource.com' not in repo_url:
raise ValueError('Repository URL is not a googlesource.com URL: {}'.format(repo_url))
full_url = repo_url + '/+/{}/{}?format=TEXT'.format(version, str(relative_path))
get_logger(prepend_timestamp=False, log_init=False).debug('Downloading: %s', full_url)
response = download_session.get(full_url)
response.raise_for_status()
# Assume all files that need patching are compatible with UTF-8
return base64.b64decode(response.text, validate=True).decode('UTF-8')
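The `?format=TEXT` responses from googlesource.com arrive base64-encoded; the decoding step in isolation (the payload here is fabricated rather than fetched):

```python
import base64

# Simulate a googlesource.com ?format=TEXT response body
payload = base64.b64encode('print("hi")\n'.encode('UTF-8')).decode('ascii')
# The script decodes it, validating the base64 and assuming UTF-8 file content
decoded = base64.b64decode(payload, validate=True).decode('UTF-8')
print(decoded.rstrip('\n'))
```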
def _get_dep_value_url(deps_globals, dep_value):
"""Helper for _process_deps_entries"""
if isinstance(dep_value, str):
url = dep_value
elif isinstance(dep_value, dict):
if 'url' not in dep_value:
# Ignore other types like CIPD since
# it probably isn't necessary
return None
url = dep_value['url']
else:
raise NotImplementedError()
if '{' in url:
# Probably a Python format string
url = url.format(**deps_globals['vars'])
if url.count('@') != 1:
raise _PatchValidationError('Invalid number of @ symbols in URL: {}'.format(url))
return url
def _process_deps_entries(deps_globals, child_deps_tree, child_path, deps_use_relative_paths):
"""Helper for _get_child_deps_tree"""
for dep_path_str, dep_value in deps_globals.get('deps', dict()).items():
url = _get_dep_value_url(deps_globals, dep_value)
if url is None:
continue
dep_path = Path(dep_path_str)
if not deps_use_relative_paths:
try:
dep_path = Path(dep_path_str).relative_to(child_path)
except ValueError:
# Not applicable to the current DEPS tree path
continue
grandchild_deps_tree = None # Delaying creation of dict() until it's needed
for recursedeps_item in deps_globals.get('recursedeps', tuple()):
if isinstance(recursedeps_item, str):
if recursedeps_item == str(dep_path):
grandchild_deps_tree = 'DEPS'
else: # Some sort of iterable
recursedeps_item_path, recursedeps_item_depsfile = recursedeps_item
if recursedeps_item_path == str(dep_path):
grandchild_deps_tree = recursedeps_item_depsfile
if grandchild_deps_tree is None:
# This dep is not recursive; i.e. it is fully loaded
grandchild_deps_tree = dict()
child_deps_tree[dep_path] = (*url.split('@'), grandchild_deps_tree)
def _get_child_deps_tree(download_session, current_deps_tree, child_path, deps_use_relative_paths):
"""Helper for _download_source_file"""
repo_url, version, child_deps_tree = current_deps_tree[child_path]
if isinstance(child_deps_tree, str):
# Load unloaded DEPS
deps_globals = _parse_deps(
_download_googlesource_file(download_session, repo_url, version, child_deps_tree))
child_deps_tree = dict()
current_deps_tree[child_path] = (repo_url, version, child_deps_tree)
deps_use_relative_paths = deps_globals.get('use_relative_paths', False)
_process_deps_entries(deps_globals, child_deps_tree, child_path, deps_use_relative_paths)
return child_deps_tree, deps_use_relative_paths
def _download_source_file(download_session, deps_tree, target_file):
"""
Downloads the source tree file from googlesource.com
download_session is an active requests.Session() object
deps_tree is the DEPS tree as returned by _initialize_deps_tree()
"""
# The "deps" from the current DEPS file
current_deps_tree = deps_tree
current_node = None
# Path relative to the current node (i.e. DEPS file)
current_relative_path = Path('src', target_file)
previous_relative_path = None
deps_use_relative_paths = False
child_path = None
while current_relative_path != previous_relative_path:
previous_relative_path = current_relative_path
for child_path in current_deps_tree:
try:
current_relative_path = previous_relative_path.relative_to(child_path)
except ValueError:
# previous_relative_path does not start with child_path
continue
# current_node will match with current_deps_tree after the next statement
current_node = current_deps_tree[child_path]
current_deps_tree, deps_use_relative_paths = _get_child_deps_tree(
download_session, current_deps_tree, child_path, deps_use_relative_paths)
break
assert current_node is not None
repo_url, version, _ = current_node
return _download_googlesource_file(download_session, repo_url, version, current_relative_path)
def _initialize_deps_tree():
"""
Initializes and returns a dependency tree for DEPS files
The DEPS tree is a dict that has the following format:
key - pathlib.Path relative to the DEPS file's path
value - tuple(repo_url, version, recursive dict here)
repo_url is the URL to the dependency's repository root
If the recursive dict is a string, then it is the path of the DEPS file to load
if needed
"""
deps_tree = {
Path('src'): ('https://chromium.googlesource.com/chromium/src.git', get_chromium_version(),
'DEPS')
}
return deps_tree
def _retrieve_remote_files(file_iter):
"""
Retrieves all file paths in file_iter from Google
file_iter is an iterable of strings that are relative UNIX paths to
files in the Chromium source.
Returns a dict of relative UNIX path strings to a list of lines in the file as strings
"""
# Load requests here so it isn't a dependency for local file reading
import requests
files = dict()
deps_tree = _initialize_deps_tree()
with requests.Session() as download_session:
download_session.stream = False # To ensure connection to Google can be reused
for file_path in file_iter:
files[file_path] = _download_source_file(download_session, deps_tree,
file_path).split('\n')
return files
def _retrieve_local_files(file_iter, source_dir):
"""
Retrieves all file paths in file_iter from the local source tree
file_iter is an iterable of strings that are relative UNIX paths to
files in the Chromium source.
Returns a dict of relative UNIX path strings to a list of lines in the file as strings
"""
files = dict()
for file_path in file_iter:
files[file_path] = (source_dir / file_path).read_text().split('\n')
return files
def _generate_full_bundle_depends(bundle_path, bundle_cache, unexplored_bundles):
"""
Generates the dependencies of the given bundle and of its dependencies, ordered deepest dependency first
"""
for dependency_name in reversed(bundle_cache[bundle_path].bundlemeta.depends):
dependency_path = bundle_path.with_name(dependency_name)
if dependency_path in unexplored_bundles:
# Remove the bundle from being explored in _get_patch_trie()
# Since this bundle is a dependency of something else, it must be checked first
# before the dependent
unexplored_bundles.remove(dependency_path)
# First, get all dependencies of the current dependency in order
yield from _generate_full_bundle_depends(dependency_path, bundle_cache, unexplored_bundles)
# Then, add the dependency itself
yield dependency_path
def _get_patch_trie(bundle_cache, target_bundles=None):
"""
Returns a trie of config bundles and their dependencies. It is a dict of the following format:
key: pathlib.Path of config bundle
value: dict of direct dependents of said bundle, in the same format as the surrounding dict.
"""
# Returned trie
patch_trie = dict()
# Set of bundles that are not children of the root node (i.e. not the lowest dependency)
# It is assumed that any bundle that is not used as a lowest dependency will never
# be used as a lowest dependency. This is the case for mixin bundles.
non_root_children = set()
# All bundles that haven't been added to the trie, either as a dependency or
# in this function explicitly
if target_bundles:
unexplored_bundles = set(target_bundles)
else:
unexplored_bundles = set(bundle_cache.keys())
# Construct patch_trie
while unexplored_bundles:
current_path = unexplored_bundles.pop()
current_trie_node = patch_trie # The root node of the trie
# Construct a branch in the patch trie up to the closest dependency
# by using the desired traversal to the config bundle.
# This is essentially a depth-first tree construction algorithm
for dependency_path in _generate_full_bundle_depends(current_path, bundle_cache,
unexplored_bundles):
if current_trie_node != patch_trie:
non_root_children.add(dependency_path)
if dependency_path not in current_trie_node:
current_trie_node[dependency_path] = dict()
# Walk to the child node
current_trie_node = current_trie_node[dependency_path]
# Finally, add the dependency itself as a leaf node of the trie
# If the assertion fails, the algorithm is broken
assert current_path not in current_trie_node
current_trie_node[current_path] = dict()
# Remove non-root node children
for non_root_child in non_root_children.intersection(patch_trie.keys()):
del patch_trie[non_root_child]
# Potential optimization: Check if leaves patch the same files as their parents.
# (i.e. if the set of files patched by the bundle is disjoint from that of the parent bundle)
# If not, move them up to their grandparent, rescan the tree leaves, and repeat
# Then, group leaves and their parents and see if the set of files patched is disjoint from
# that of the grandparents. Repeat this with great-grandparents and increasingly larger
# groupings until all groupings end up including the top-level nodes.
# This optimization saves memory by not needing to store all the patched files of
# a long branch at once.
# However, since the trie for the current structure is quite flat and all bundles are
# quite small (except common, which is by far the largest), this isn't necessary for now.
return patch_trie
def _modify_file_lines(patched_file, file_lines):
"""Helper for _apply_file_unidiff"""
# Cursor for keeping track of the current line during hunk application
# NOTE: The cursor is based on the line list index, not the line number!
line_cursor = None
for hunk in patched_file:
# Validate hunk will match
if not hunk.is_valid():
raise _PatchValidationError('Hunk is not valid: {}'.format(repr(hunk)))
line_cursor = hunk.target_start - 1
for line in hunk:
normalized_line = line.value.rstrip('\n')
if line.is_added:
file_lines[line_cursor:line_cursor] = (normalized_line, )
line_cursor += 1
elif line.is_removed:
if normalized_line != file_lines[line_cursor]:
raise _PatchValidationError(
"Line '{}' does not match removal line '{}' from patch".format(
file_lines[line_cursor], normalized_line))
del file_lines[line_cursor]
else:
assert line.is_context
if not normalized_line and line_cursor == len(file_lines):
# We reached the end of the file
break
if normalized_line != file_lines[line_cursor]:
raise _PatchValidationError(
"Line '{}' does not match context line '{}' from patch".format(
file_lines[line_cursor], normalized_line))
line_cursor += 1
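The cursor arithmetic above can be exercised with a toy stand-in for unidiff's line objects, using plain `(op, text)` tuples. This mirrors the loop in `_modify_file_lines` but is an illustration, not the unidiff API:

```python
def apply_hunk(file_lines, target_start, hunk_lines):
    """Toy hunk application: '+' inserts, '-' removes, ' ' is context."""
    cursor = target_start - 1  # list index, not 1-based line number
    for op, text in hunk_lines:
        if op == '+':
            file_lines[cursor:cursor] = (text, )
            cursor += 1
        elif op == '-':
            assert file_lines[cursor] == text, 'removal line mismatch'
            del file_lines[cursor]  # cursor now points at the next line
        else:
            assert file_lines[cursor] == text, 'context line mismatch'
            cursor += 1
    return file_lines

lines = ['alpha', 'beta', 'gamma']
apply_hunk(lines, 2, [(' ', 'beta'), ('-', 'gamma'), ('+', 'delta')])
print(lines)  # ['alpha', 'beta', 'delta']
```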
def _apply_file_unidiff(patched_file, child_files, parent_file_layers):
"""Applies the unidiff.PatchedFile to the files at the current file layer"""
patched_file_path = Path(patched_file.path)
if patched_file.is_added_file:
if patched_file_path in child_files:
assert child_files[patched_file_path] is None
assert len(patched_file) == 1 # Should be only one hunk
assert patched_file[0].removed == 0
assert patched_file[0].target_start == 1
child_files[patched_file_path] = [x.value for x in patched_file[0]]
elif patched_file.is_removed_file:
child_files[patched_file_path] = None
else: # Patching an existing file
assert patched_file.is_modified_file
if patched_file_path not in child_files:
child_files[patched_file_path] = parent_file_layers[patched_file_path].copy()
_modify_file_lines(patched_file, child_files[patched_file_path])
def _test_patches(patch_trie, bundle_cache, patch_cache, orig_files):
"""
Tests the patches with DFS in the trie of config bundles
Returns a boolean indicating if any of the patches have failed
"""
# Stack of iterables over each node's children
# First, insert iterable over root node's children
node_iter_stack = [iter(patch_trie.items())]
# Stack of files at each node differing from the parent
# The root node thus contains all the files to be patched
file_layers = collections.ChainMap(orig_files)
# Whether any branch had failed validation
had_failure = False
while node_iter_stack:
try:
child_path, grandchildren = next(node_iter_stack[-1])
except StopIteration:
# Finished exploring all children of this node
node_iter_stack.pop()
del file_layers.maps[0]
continue
# Add storage for child's patched files
file_layers = file_layers.new_child()
# Apply children's patches
get_logger(
prepend_timestamp=False, log_init=False).info('Verifying at depth %s: %s',
len(node_iter_stack), child_path.name)
# Potential optimization: Use interval tree data structure instead of copying
# the entire array to track only diffs
# Whether the current patch trie branch failed validation
branch_validation_failed = False
assert child_path in bundle_cache
try:
child_patch_order = bundle_cache[child_path].patch_order
except KeyError:
# No patches in the bundle
pass
else:
for patch_path_str in child_patch_order:
for patched_file in patch_cache[patch_path_str]:
try:
_apply_file_unidiff(patched_file, file_layers.maps[0], file_layers.parents)
except _PatchValidationError as exc:
# Branch failed validation; abort
get_logger(
prepend_timestamp=False, log_init=False).error(
"Error processing file '%s' from patch '%s': %s", patched_file.path,
patch_path_str, str(exc))
branch_validation_failed = True
had_failure = True
break
except BaseException:
# Branch failed validation; abort
get_logger(
prepend_timestamp=False, log_init=False).exception(
"Error processing file '%s' from patch '%s'", patched_file.path,
patch_path_str)
branch_validation_failed = True
had_failure = True
break
if branch_validation_failed:
break
if branch_validation_failed:
# Add blank children to force stack to move onto the next branch
node_iter_stack.append(iter(tuple()))
else:
# Explore this child's children
node_iter_stack.append(iter(grandchildren.items()))
return had_failure
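The layering in `_test_patches` relies on `collections.ChainMap`: `new_child()` pushes a fresh writable layer when descending into a bundle, and `del file_layers.maps[0]` pops it when backtracking, restoring the parent's view. A minimal demonstration:

```python
import collections

orig_files = {'a.cc': ['original']}
layers = collections.ChainMap(orig_files)

# Descend into a child bundle: writes land in the new front layer only
layers = layers.new_child()
layers['a.cc'] = ['patched']
assert layers['a.cc'] == ['patched']

# Backtrack: drop the child's layer; the parent's view reappears unchanged
del layers.maps[0]
assert layers['a.cc'] == ['original']
```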
def _load_all_patches(bundle_iter, patch_dir=DEFAULT_PATCH_DIR):
"""Returns a dict of relative UNIX path strings to unidiff.PatchSet"""
unidiff_dict = dict()
for bundle in bundle_iter:
try:
patch_order_iter = iter(bundle.patch_order)
except KeyError:
continue
for relative_path in patch_order_iter:
if relative_path in unidiff_dict:
continue
unidiff_dict[relative_path] = unidiff.PatchSet.from_filename(
str(patch_dir / relative_path), encoding=ENCODING)
return unidiff_dict
def _get_required_files(patch_cache):
"""Returns an iterable of pathlib.Path files needed from the source tree for patching"""
new_files = set() # Files introduced by patches
file_set = set()
for patch_set in patch_cache.values():
for patched_file in patch_set:
if patched_file.is_added_file:
new_files.add(patched_file.path)
elif patched_file.path not in new_files:
file_set.add(Path(patched_file.path))
return file_set
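The split between `new_files` and `file_set` matters because a file introduced by an earlier patch never needs to exist in the source tree, even if a later patch modifies it. A toy run over plain data (the paths are hypothetical):

```python
from pathlib import Path

# Each "patch" is a list of (path, is_added_file) pairs; patch order matters.
patch_cache = {
    '0001.patch': [('new/file.cc', True)],
    '0002.patch': [('new/file.cc', False), ('existing/file.cc', False)],
}

new_files = set()  # files introduced by patches
file_set = set()   # files that must come from the source tree
for patch_set in patch_cache.values():
    for path, is_added in patch_set:
        if is_added:
            new_files.add(path)
        elif path not in new_files:
            file_set.add(Path(path))

# file_set contains only the path for 'existing/file.cc';
# 'new/file.cc' is excluded since a patch creates it.
```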
def main():
"""CLI Entrypoint"""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
'-b',
'--bundle',
action='append',
type=Path,
metavar='DIRECTORY',
help=('Verify patches for a config bundle. Specify multiple times to '
'verify multiple bundles. Without specifying, all bundles will be verified.'))
file_source_group = parser.add_mutually_exclusive_group(required=True)
file_source_group.add_argument(
'-l', '--local', type=Path, metavar='DIRECTORY', help='Use a local source tree')
file_source_group.add_argument(
'-r',
'--remote',
action='store_true',
help='Download the required source tree files from Google')
file_source_group.add_argument(
'-c',
'--cache-remote',
type=Path,
metavar='DIRECTORY',
help='(For debugging) Store the required remote files in an empty local directory')
args = parser.parse_args()
if args.cache_remote and not args.cache_remote.exists():
if args.cache_remote.parent.exists():
args.cache_remote.mkdir()
else:
parser.error('Parent of cache path {} does not exist'.format(args.cache_remote))
# Path to bundle -> ConfigBundle without dependencies
bundle_cache = dict(
map(lambda x: (x, ConfigBundle(x, load_depends=False)), _CONFIG_BUNDLES_PATH.iterdir()))
patch_trie = _get_patch_trie(bundle_cache, args.bundle)
patch_cache = _load_all_patches(bundle_cache.values())
required_files = _get_required_files(patch_cache)
if args.local:
orig_files = _retrieve_local_files(required_files, args.local)
else: # --remote and --cache-remote
orig_files = _retrieve_remote_files(required_files)
if args.cache_remote:
for file_path, file_content in orig_files.items():
if not (args.cache_remote / file_path).parent.exists():
(args.cache_remote / file_path).parent.mkdir(parents=True)
with (args.cache_remote / file_path).open('w', encoding=ENCODING) as cache_file:
cache_file.write('\n'.join(file_content))
parser.exit()
had_failure = _test_patches(patch_trie, bundle_cache, patch_cache, orig_files)
if had_failure:
parser.exit(status=1)
if __name__ == '__main__':
main()

5
devutils/yapf_buildkit.sh Executable file
View File

@@ -0,0 +1,5 @@
#!/bin/bash
set -eux
python3 -m yapf --style '.style.yapf' -e '*/third_party/*' -rpi buildkit

5
devutils/yapf_devutils.sh Executable file
View File

@@ -0,0 +1,5 @@
#!/bin/bash
set -eux
python3 -m yapf --style '.style.yapf' -rpi "$@"

View File

@@ -4,7 +4,7 @@
**Statuses of platform support**: Because platform support varies across stable versions, [this Wiki page tracks platform support for the current stable](//github.com/Eloston/ungoogled-chromium/wiki/statuses). *Please check the status before attempting a build or posting an issue*.
**Choosing branches**: The `master` branch contains stable code, and `develop` is for unstable code. Please do not use `develop` unless you know what you are doing.
**Choosing a version**: *It is highly recommended to choose a tag version for building.* `master` and other branches are not guaranteed to be in a working state.
## Contents
@@ -41,41 +41,35 @@ Install base requirements: `# apt install packaging-dev python3 ninja-build`
On Debian 9 (stretch), `stretch-backports` APT source is used to obtain LLVM 6.0. Do NOT use debhelper 11 from backports, as it will be incompatible with other dpkg tools.
#### Setting up the buildspace tree and packaging files
#### Building locally
Procedure for Debian 9 (stretch):
```
mkdir -p buildspace/downloads # Alternatively, buildspace/ can be a symbolic link
./buildkit-launcher.py genbun debian_stretch
./buildkit-launcher.py getsrc
./buildkit-launcher.py subdom
./buildkit-launcher.py genpkg debian --flavor stretch
```
TODO: Investigate using dpkg-source to build a source package
The buildspace tree can be relocated to another system for building if necessary.
#### Invoking build
```
cd buildspace/tree
```sh
mkdir -p build/{downloads,src}
# TODO: Move download commands into custom debian/rules command
python3 -m buildkit downloads retrieve -b config_bundles/debian_stretch -c build/downloads
python3 -m buildkit downloads unpack -b config_bundles/debian_stretch -c build/downloads build/src
./get_package.py debian_stretch build/src/debian
cd build/src
# Use dpkg-checkbuilddeps (from dpkg-dev) or mk-build-deps (from devscripts) to check for additional packages.
dpkg-buildpackage -b -uc
```
Packages will appear under `buildspace/`.
Packages will appear under `build/`.
#### Building via source package
TODO
#### Notes for Debian derivatives
Ubuntu 18.04 (bionic): Same as Debian 9 except the `ubuntu_bionic` base bundle and the `buster` flavor are used.
Ubuntu 18.04 (bionic): Same as Debian 9 except the `ubuntu_bionic` bundle and the `debian_buster` packaging files are used.
Ubuntu 16.04 (xenial), Ubuntu 17.10 (artful), Debian 8.0 (jessie), and other older versions: See [Other Linux distributions](#other-linux-distributions)
### Windows
**NOTE**: There is no official maintainer for this platform. If there is a problem, please submit a pull request or issue (after checking the status page in the Wiki first).
Google only supports [Windows 7 x64 or newer](https://chromium.googlesource.com/chromium/src/+/64.0.3282.168/docs/windows_build_instructions.md#system-requirements). These instructions are tested on Windows 10 Home x64.
NOTE: The default configuration will build 64-bit binaries for maximum security (TODO: Link some explanation). This can be changed to 32-bit by changing `target_cpu` to `"x32"` (*with* quotes) in the user config bundle GN flags config file (default path is `buildspace/user_bundle/gn_flags.map`).
@@ -130,8 +124,6 @@ The buildspace tree can be relocated to another system for building if necessary
### macOS
**NOTE**: There is no official maintainer for this platform. If there is a problem, please submit a pull request or issue (after checking the status page in the Wiki first).
Tested on macOS 10.11-10.13
#### Additional Requirements
@@ -145,60 +137,34 @@ Tested on macOS 10.11-10.13
1. Install Ninja via Homebrew: `brew install ninja`
2. Install GNU coreutils (for `greadlink` in packaging script): `brew install coreutils`
#### Setting up the buildspace tree and packaging files
#### Building
```
mkdir -p buildspace/downloads # Alternatively, buildspace/ can be a symbolic link
./buildkit-launcher.py genbun macos
./buildkit-launcher.py getsrc
./buildkit-launcher.py subdom
./buildkit-launcher.py genpkg macos
```
The buildspace tree can be relocated to another system for building if necessary.
#### Invoking build
```
cd buildspace/tree
chmod +x ungoogled_packaging/build.sh
```sh
mkdir -p build/src/ungoogled_packaging
./get_package.py macos build/src/ungoogled_packaging
cd build/src
./ungoogled_packaging/build.sh
```
A `.dmg` should appear in `buildspace/`
A `.dmg` should appear in `build/`
### Arch Linux
**NOTE**: There is no official maintainer for this platform. If there is a problem, please submit a pull request or issue (after checking the status page in the Wiki first).
There are two methods to build for Arch Linux outlined in the following sections.
#### Use PKGBUILD
These steps are for using a PKGBUILD to create a package. The PKGBUILD handles downloading, unpacking, building, and packaging (which uses a copy of buildkit internally).
A PKGBUILD is used to build on Arch Linux. It handles downloading, unpacking, building, and packaging.
Requirements: Python 3 is needed to generate the PKGBUILD. The PKGBUILD contains build dependency information.
Generate the PKGBUILD:
```
mkdir buildspace
python3 buildkit-launcher.py genpkg -b archlinux archlinux
./get_package.py archlinux ./
```
A PKGBUILD will be generated in `buildspace`. It is a standalone file that can be relocated as necessary.
A PKGBUILD will be generated in the current directory. It is a standalone file that can be relocated as necessary.
#### Create a compressed tar archive
### openSUSE
These steps create an archive of the build outputs.
Requirements: Same as the build dependencies in the PKGBUILD (which can be seen in `resources/packaging/archlinux/PKGBUILD.in`).
The instructions are the same as [Other Linux distributions](#other-linux-distributions), except that the `archlinux` base bundle is used in the `genbun` command.
### OpenSUSE
Tested on OpenSUSE Leap 42.3
Tested on openSUSE Leap 42.3
#### Setting up the build environment
@@ -210,16 +176,14 @@ Follow the following guide to set up Python 3.6.4: [https://gist.github.com/anti
As of Chromium 66.0.3359.117, LLVM, LLD, and Clang version 6 or greater are required to avoid compiler errors.
#### Setting up the buildspace tree and packaging files
#### Generate packaging scripts
Before executing the following commands, make sure you are using Python 3.6, as mentioned in the build environment section of this guide.
```
mkdir -p buildspace/downloads
./buildkit-launcher.py genbun opensuse
./buildkit-launcher.py getsrc
./buildkit-launcher.py subdom
./buildkit-launcher.py genpkg opensuse
mkdir -p build/{download_cache,src}
# TODO: The download commands should be moved into the packaging scripts
./get_package.py opensuse build/src/ungoogled_packaging
```
Before proceeding to build Chromium, open a new tab or otherwise exit the Python 3.6 virtual environment, as it will cause errors in the next steps.
@@ -241,7 +205,7 @@ EOF
#### Invoking build and installing package
```
cd buildspace/tree
cd build/src
./ungoogled_packaging/setup.sh
cd ~/rpm
rpmbuild -v -bb --clean SPECS/ungoogled-chromium.spec
@@ -253,8 +217,6 @@ The RPM will be located in `~/rpm/RPMS/{arch}/` once rpmbuild has finished. It c
These are for building on Linux distributions that do not have support already. It builds without distribution-optimized flags and patches for maximum compatibility.
**NOTE**: There is no official maintainer for this platform. If there is a problem, please submit a pull request or issue (after checking the status page in the Wiki first).
#### Requirements
TODO: Document all libraries and tools needed to build. For now, see the build dependencies for Debian systems.
@@ -268,82 +230,32 @@ For Debian-based systems, these requirements can be installed via: `# apt instal
* If not building a `.deb` package, replace `packaging-dev` with `python clang-6.0 lld-6.0 llvm-6.0-dev`
#### Setting up the buildspace tree
#### Build a Debian package
First, setup the source tree:
Builds a `.deb` package for any Debian-based system
```
mkdir -p buildspace/downloads
./buildkit-launcher.py genbun linux_portable
./buildkit-launcher.py subdom
```
#### Generating packaging files and invoking build
**Debian package**
Builds a `deb` package for any Debian-based system
```
./buildkit-launcher.py genpkg debian --flavor minimal
# The buildspace tree can be relocated to another system for building
cd buildspace/tree
mkdir build/src
./get_package.py debian_minimal build/src/debian
cd build/src
# TODO: Custom command to download sources
# Use dpkg-checkbuilddeps (from dpkg-dev) or mk-build-deps (from devscripts) to check for additional packages.
# If necessary, change the dependencies in debian/control to accommodate your environment.
# If necessary, modify AR, NM, CC, and CXX variables in debian/rules
dpkg-buildpackage -b -uc
```
Packages will appear in `buildspace/`
Packages will appear in `build/`
**Archive**
Builds a compressed tar archive
#### Build an archive
```
./buildkit-launcher.py genpkg linux_simple
# The buildspace tree can be relocated to another system for building
cd buildspace/tree
mkdir -p build/src
./get_package.py linux_simple build/src/ungoogled_packaging
cd build/src
# Use "export ..." for AR, NM, CC, CXX, or others to specify the compiler to use
# It defaults to LLVM tools. See ./ungoogled_packaging/build.sh for more details
./ungoogled_packaging/build.sh
./ungoogled_packaging/package.sh
```
A compressed tar archive will appear in `buildspace/tree/ungoogled_packaging/`
## Advanced building information
This section holds some information about building for unsupported systems and a rough building outline.
It is recommended to have an understanding of [DESIGN.md](DESIGN.md).
**Note for unsupported systems**: There is no set procedure for building ungoogled-chromium on unsupported systems. One should already be able to build Chromium for their system before attempting to include ungoogled-chromium changes. More information about the Chromium build procedure is on [the Chromium project website](https://www.chromium.org/Home). One should also understand [DESIGN.md](DESIGN.md) before including ungoogled-chromium changes.
### Essential building requirements
Here are the essential building requirements:
* Python 3 (tested on 3.5) for running buildkit
* Python 2 (tested on 2.7) for building GN and running other scripts
* [Ninja](//ninja-build.org/) for running the build command
* (For developers) [Quilt](//savannah.nongnu.org/projects/quilt/) is recommended for patch management.
* [python-quilt](//github.com/bjoernricks/python-quilt) can be used as well.
Alternatively, [depot_tools](//www.chromium.org/developers/how-tos/install-depot-tools) can provide Python 2 and Ninja.
### Outline building procedure
This section has a rough outline of the entire building procedure.
In the following steps, `buildkit` represents the command to invoke buildkit's CLI.
Note that each buildkit command has a help page associated with it. Pass in `-h` or `--help` for more information.
1. Create `buildspace/` and `buildspace/downloads`. Other directories are created already.
2. Generate a user config bundle from a base config bundle: `buildkit genbun base_bundle`
3. Modify the user config bundle (default location is `buildspace/user_bundle`)
4. Create the buildspace tree: `buildkit getsrc`
5. Apply domain substitution: `buildkit subdom`
6. Generate packaging files into the buildspace tree: `buildkit genpkg package_type [options]`
7. Relocate the buildspace tree (with packaging files) to the proper machine for building.
8. Invoke the packaging scripts to build and package ungoogled-chromium.
A compressed tar archive will appear in `build/src/ungoogled_packaging/`

View File

@@ -12,7 +12,6 @@ ungoogled-chromium consists of the following major components:
* [Patches](#patches)
* [Packaging](#packaging)
* [buildkit](#buildkit)
* [Buildspace](#buildspace)
The following sections describe each component.
@@ -48,45 +47,40 @@ Config files are usually stored in a [configuration bundle](#configuration-bundl
*Also known as config bundles, or bundles.*
Configuration bundles are a collection of config files grouped by system, platform, or target. They are stored as filesystem directories containing the config files. There are two kinds of config bundles:
Configuration bundles are a collection of config files grouped by system, platform, or target. They are stored as filesystem directories containing the config files.
* *Base bundles* - Bundles included in ungoogled-chromium (which reside under `resources/config_bundles`). They are generally used for creating user bundles. All base bundles must include `basebundlemeta.ini`. Unlike user bundles, all base bundles' patches are stored in `resources/patches`.
Many configurations share a lot in common. To reduce duplication, bundles can depend on other bundles by specifying a list of dependencies in the `depends` key of `bundlemeta.ini`. When dependencies are present, bundles only contain the config file data that is modified in or added to its dependencies. The following are additional points about bundle dependencies:
* Direct dependencies for any one bundle are ordered; the ordering specifies how dependency configuration is resolved in a consistent manner.
* This ordering is determined by the order in which they appear in the `depends` key of `bundlemeta.ini`; dependencies are applied from right to left just like multiple inheritance in Python, i.e. dependencies appearing first will have their lists appended to that of subsequent dependencies, and have their mapping and INI values take precedence over subsequent dependencies.
* The graph of all bundle dependency relationships must be representable by a [polytree](https://en.wikipedia.org/wiki/Polytree) to be valid.
* Due to the direct dependency ordering and polytree requirements, all dependencies for a bundle can be resolved to a consistent sequence. This sequence is known as the *dependency order*.
* Bundles may depend on mixins. Mixins are like bundles, but they are only used as dependencies for bundles or other mixins, and their names are always prefixed with `_mixin`. This means that mixins are not valid configurations; they only contain partial data. These are similar in idea to mixins in Python.
Many configurations share a lot in common. To reduce duplication, base bundles can depend on other base bundles by specifying a list of dependencies in the `depends` key of `basebundlemeta.ini`. When dependencies are present, base bundles only contain the config file data that is modified in or added to its dependencies. The following are additional points about base bundle dependencies:
* Direct dependencies for any one base bundle are ordered; the ordering specifies how dependency configuration is resolved in a consistent manner.
* This ordering is determined by the order in which they appear in the `depends` key of `basebundlemeta.ini`; dependencies are applied from right to left just like multiple inheritance in Python.
* The graph of all base bundle dependency relationships must be representable by a [polytree](https://en.wikipedia.org/wiki/Polytree) to be valid.
* Due to the direct dependency ordering and polytree requirements, all dependencies for a base bundle can be resolved to a consistent sequence. This sequence is known as the *dependency order*.
* Base bundles may depend on mixins. Mixins are like base bundles, but they are only used as dependencies for base bundles or other mixins, and their names are always prefixed with `_mixin`. This means that mixins are not valid configurations; they only contain partial data. These are similar in idea to mixins in Python.
Bundles merge config file types from its dependencies in the following manner (config file types are explained in [the Configuration Files section](#configuration-files)):
* `.list` - List files are joined in the dependency order.
* `.map` - Entries (key-value pairs) are collected together. If a key exists in two or more dependencies, the subsequent dependencies in the dependency order have precedence.
* `.ini` - Sections are collected together. If a section exists in two or more dependencies, its keys are resolved in an identical manner as mapping config files.
Base bundles merge config file types from its dependencies in the following manner (config file types are explained in [the Configuration Files section](#configuration-files)):
* `.list` - List files are joined in the dependency order.
* `.map` - Entries (key-value pairs) are collected together. If a key exists in two or more dependencies, the subsequent dependencies in the dependency order have precedence.
* `.ini` - Sections are collected together. If a section exists in two or more dependencies, its keys are resolved in an identical manner as mapping config files.
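As a rough sketch (not buildkit's actual implementation), the `.list` and `.map` merge rules could be expressed as follows; `.ini` sections would be resolved by applying the map rule per section:

```python
def merge_list(dep_lists):
    """.list files are simply concatenated in dependency order."""
    merged = []
    for entries in dep_lists:
        merged.extend(entries)
    return merged

def merge_map(dep_maps):
    """.map files collect key-value pairs; later dependencies take precedence."""
    merged = {}
    for mapping in dep_maps:
        merged.update(mapping)
    return merged

assert merge_list([['a.patch'], ['b.patch']]) == ['a.patch', 'b.patch']
assert merge_map([{'flag': '1'}, {'flag': '2'}]) == {'flag': '2'}
```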
Bundles vary in specificity; some apply across multiple kinds of systems, and some apply to a specific family. For example:
* Each family of Linux distributions should have their own bundle (e.g. Debian, Fedora)
* Each distribution within that family can have their own bundle ONLY if they cannot be combined (e.g. Debian and Ubuntu)
* Each version for a distribution can have their own bundle ONLY if the versions in question cannot be combined and should be supported simultaneously (e.g. Debian testing and stable, Ubuntu LTS and regular stables)
* Custom Linux systems for personal or limited use **should not** have a bundle (such modifications should take place in the packaging scripts).
Base bundles vary in specificity; some apply across multiple kinds of systems, and some apply to a specific family. However, no base bundle may become more specific than a "public" system variant; since there is no concrete definition, the policy for Linux distribution base bundles is used to illustrate:
* Each family of Linux distributions should have their own base bundle (e.g. Debian, Fedora)
* Each distribution within that family can have their own base bundle ONLY if they cannot be combined (e.g. Debian and Ubuntu)
* Each version for a distribution can have their own base bundle ONLY if the versions in question cannot be combined and should be supported simultaneously (e.g. Debian testing and stable, Ubuntu LTS and regular stables)
* Custom Linux systems for personal or limited use **should not** have a base bundle.
Among the multiple base bundles and mixins, here are a few noteworthy ones:
* `common` - The base bundle used by all other base bundles. It contains most, if not all, of the feature-implementing configuration.
* `linux_rooted` - The base bundle used by Linux base bundles that build against system libraries.
* `linux_portable` - The base bundle used for building with minimal dependency on system libraries. It is more versatile than `linux_rooted` since it is less likely to break due to system library incompatibility.
* *User bundles* - Bundles intended for use in building. They cannot have dependencies, so they must contain all configuration data. They are usually generated from base bundles, from which they can be modified by the user. Unlike base bundles, all patches used must be contained within the user bundle.
Among the multiple bundles and mixins, here are a few noteworthy ones:
* `common` - The bundle used by all other bundles. It contains most, if not all, of the feature-implementing configuration.
* `linux_rooted` - The bundle used by Linux bundles that build against system libraries.
* `linux_portable` - The bundle used for building with minimal dependency on system libraries. It is more versatile than `linux_rooted` since it is less likely to break due to system library incompatibility.
Config bundles can only contain the following files:
* `bundlemeta.ini` - Metadata for the bundle.
* `pruning.list` - [See the Source File Processors section](#source-file-processors)
* `domain_regex.list` - [See the Source File Processors section](#source-file-processors)
* `domain_substitution.list` - [See the Source File Processors section](#source-file-processors)
* `extra_deps.ini` - Extra archives to download and unpack into the buildspace tree. This includes code not bundled in the Chromium source code archive that is specific to a non-Linux platform. On platforms such as macOS, this also includes a pre-built LLVM toolchain for convenience (which can be removed and built from source if desired).
* `downloads.ini` - Archives to download and unpack into the buildspace tree. This includes code not bundled in the Chromium source code archive that is specific to a non-Linux platform. On platforms such as macOS, this also includes a pre-built LLVM toolchain for convenience (which can be removed and built from source if desired).
* `gn_flags.map` - GN arguments to set before building.
* `patch_order.list` - The series of patches to apply with paths relative to the `patches/` directory (whether they be in `resources/` or the bundle itself).
* `version.ini` - Tracks the Chromium version to use, the ungoogled-chromium revision, and any configuration-specific version information.
* `basebundlemeta.ini` *(Base config bundles only)* - See the description of base bundles above.
* `patches/` *(User config bundles only)* - Contains the patches referenced by `patch_order.list`. [See the Patches section](#patches) for more details.
* `patch_order.list` - The series of patches to apply with paths relative to the `patches/` directory.
### Source File Processors
@@ -108,7 +102,7 @@ The regular expressions to use are listed in `domain_regex.list`; the search and
### Patches
All of ungoogled-chromium's patches for the Chromium source code are located in `resources/patches`. The patches in this directory are referenced by base config bundles' `patch_order.list` config file. When a user config bundle is created, only the patches required by the user bundle's `patch_order.list` config file are copied from `resources/patches` into the user bundle's `patches/` directory.
All of ungoogled-chromium's patches for the Chromium source code are located in `patches/`. The patches in this directory are referenced by config bundles' `patch_order.list` config file.
A file with the extension `.patch` is a patch in the [unified format](https://en.wikipedia.org/wiki/Diff_utility#Unified_format). The requirements and recommendations for patch files are as follows:
@@ -145,17 +139,19 @@ Packaging is the process of producing a distributable package for end-users. Thi
Building the source code consists of the following steps:
1. Apply patches
2. Build GN via `tools/gn/bootstrap/bootstrap.py`
3. Run `gn gen` with the GN flags
4. Build Chromium via `ninja`
1. Prune binaries
2. Apply patches
3. Substitute domains
4. Build GN via `tools/gn/bootstrap/bootstrap.py`
5. Run `gn gen` with the GN flags
6. Build Chromium via `ninja`
Packaging consists of packaging types; each type has differing package outputs and invocation requirements. Some packaging types divide the building and package generation steps; some have it all-in-one. The current packaging types are as follows:
* `archlinux` - Generates a PKGBUILD that downloads, builds, and packages Chromium. Unlike other packaging types, this type does not use the buildspace tree; it is a standalone script that automates the entire process.
* `debian` - Generates `debian` directories for building `.deb.` packages for Debian and derivative systems. There are different variants of Debian packaging scripts known as *flavors*. The current flavors are:
* (debian codename here) - For building on the Debian version with the corresponding code name. They are derived from Debian's `chromium` package, with only a few modifications. Older codenames are built upon newer ones. These packaging types are intended to be used with derivatives of the `linux_rooted` base bundle.
* `minimal` - For building with a derivative of the `linux_portable` base bundle.
* (debian codename here) - For building on the Debian version with the corresponding code name. They are derived from Debian's `chromium` package, with only a few modifications. Older codenames are built upon newer ones. These packaging types are intended to be used with derivatives of the `linux_rooted` bundle.
* `minimal` - For building with a derivative of the `linux_portable` bundle.
* `linux_simple` - Generates two shell scripts for Linux. The first applies patches and builds Chromium. The second packages the build outputs into a compressed tar archive.
* `macos` - Generates a shell script for macOS to build Chromium and package the build outputs into a `.dmg`.
@@ -163,11 +159,11 @@ The directories in `resources/packaging` correspond to the packaging type names.
## buildkit
buildkit is a Python 3 library and CLI application for building ungoogled-chromium. Its main purpose is to setup the buildspace tree and any requested building or packaging scripts from the `resources/` directory.
buildkit is a Python 3 library and CLI application for building ungoogled-chromium. It is designed to be used by the packaging process to assist with building and certain packaging steps.
Use `buildkit-launcher.py` to invoke the buildkit CLI. Pass in `-h` or `--help` for usage details.
For examples of using buildkit's CLI, see [BUILDING.md](BUILDING.md).
For examples of using buildkit's CLI, see [docs/building.md](docs/building.md).
There is currently no API documentation for buildkit. However, all public classes, functions, and methods have docstrings that explain their usage and behavior.
@@ -176,14 +172,3 @@ There is currently no API documentation for buildkit. However, all public classe
buildkit should be simple and transparent instead of limited and intelligent when it is reasonable. As an analogy, buildkit should be like git in terms of the scope and behavior of functionality (e.g. subcommands) and as a system in whole.
buildkit should be as configuration- and platform-agnostic as possible. If there is some new functionality that is configuration-dependent or would require extending the configuration system (e.g. adding new config file types), it is preferred for this to be added to packaging scripts (in which scripts shared among packaging types are preferred over those for specific types).
## Buildspace
Buildspace is a directory that contains all intermediate and final files for building. Its default location is in the repository directory as `buildspace/`. The directory structure is as follows:
* `tree` - The Chromium source tree, which also contains build intermediates.
* `downloads` - Directory containing all downloaded files; these are currently the Chromium source code archive and any potential extra dependencies.
* `user_bundle` - The user config bundle used for building.
* Packaged build artifacts
(The directory may contain additional files if developer utilities are used)

View File

@@ -1,6 +1,14 @@
# Development notes and procedures
The [GitHub Wiki](//github.com/Eloston/ungoogled-chromium/wiki) contains some additional information that changes more frequently.
This document contains an assortment of information for those who want to develop ungoogled-chromium.
Information targeted towards developers *and* other users lives in [the Wiki](//ungoogled-software.github.io/ungoogled-chromium-wiki/).
## Branches
Development is focused on `master`; changes there should not break anything, except when platforms break during a Chromium version rebase.
Features that require some time to complete must be done in a separate branch. Once ready, the branch can be merged into `master` and then removed.
## Adding command-line flags and `chrome://flags` options
@@ -10,15 +18,15 @@ For new flags, first add a constant to `third_party/ungoogled/ungoogled_switches
## Notes on updating base bundles
To develop a better understanding of base bundles, have a look through [DESIGN.md](DESIGN.md) *and* the existing base bundles. Reading only DESIGN.md may make it difficult to develop intuition of the configuration system, and only exploring existing base bundles may not lead you to the whole picture.
To develop a better understanding of base bundles, have a look through [docs/design.md](docs/design.md) *and* the existing base bundles. Reading only docs/design.md may make it difficult to develop intuition of the configuration system, and only exploring existing base bundles may not lead you to the whole picture.
Anytime the base bundles or patches are modified, use `developer_utilities/validate_config.py` to run several sanity checking algorithms.
Anytime the base bundles or patches are modified, use `devutils/validate_config.py` to run several sanity checking algorithms.
## Workflow of updating patches
Tested on Debian 9.0 (stretch). Exact instructions should work on any other Linux or macOS system with the proper dependencies.
It is recommended to read the [BUILDING.md](BUILDING.md) and [DESIGN.md](DESIGN.md) documents first to gain a deeper understanding of the process.
It is recommended to read the [docs/building.md](docs/building.md) and [docs/design.md](docs/design.md) documents first to gain a deeper understanding of the process.
### Dependencies
@@ -33,23 +41,29 @@ This is an example workflow on Linux that can be modified for your specific usag
### Downloading the source code and updating lists
The utility `developer_utilities/update_lists.py` automates this process. By default, it updates the `common` base bundle. Pass in `-h` or `--help` for available options.
The utility `devutils/update_lists.py` automates this process. By default, it updates the `common` base bundle. Pass in `-h` or `--help` for available options.
Here's an example for updating the `common` configuration type:
```
mkdir -p buildspace/downloads
./developer_utilities/update_lists.py --auto-download
mkdir -p build/downloads
./devutils/update_lists.py --auto-download -c build/downloads -t build/src
```
The resulting source tree in `build/src` will not have binaries pruned or domains substituted.
#### Updating patches
**IMPORTANT**: Make sure domain substitution has not been applied before continuing. Otherwise, the resulting patches will require domain substitution.
1. Set up a buildspace tree without domain substitution. For the `common` base bundle: `./buildkit-launcher.py getsrc -b common`
2. Generate a temporary patch order list for a given base bundle. For the `common` base bundle: `developer_utilities/generate_patch_order.py common`
3. Run `source $ROOT/developer_utilities/set_quilt_vars.sh`
* This will set up quilt to modify patches directly in `resources/`
1. Set up a source tree without domain substitution. For the `common` bundle:
1. `python3 -m buildkit downloads retrieve -b config_bundles/common -c build/downloads`
2. `python3 -m buildkit downloads unpack -b config_bundles/common -c build/downloads build/src`
2. Run `source devutils/set_quilt_vars.sh`
* This will set up quilt to modify patches directly in `patches/`
3. Conditional step:
* If updating all patches, run `./devutils/update_patches.py -as build/src`. If successful, then everything is done. Otherwise, continue on to the next step.
* If updating patches for a specific bundle, run `./devutils/generate_patch_order.py BUNDLE_PATH_HERE build/updating_patch_order.list` and continue on to the next step.
4. Use `quilt` to update the patches from the buildspace tree. The general procedure is as follows:
1. Make sure all patches are unapplied: `quilt pop -a`. Check the status with `quilt top`
2. Refresh patches that have fuzz or offsets until the first patch that can't apply: `while quilt push; do quilt refresh; done`
@@ -57,8 +71,15 @@ mkdir -p buildspace/downloads
4. Edit the broken files as necessary by adding (`quilt edit ...` or `quilt add ...`) or removing (`quilt remove ...`) files as necessary
* When removing large chunks of code, remove each line instead of using language features to hide or remove the code. This makes the patches less susceptible to breakage when using quilt's refresh command (e.g. `quilt refresh` updates line numbers based on the patch context, so new but desirable code in the middle of a commented-out block could be excluded). It also helps readability when someone wants to see the changes made from the patch alone.
5. Refresh the patch: `quilt refresh`
6. Go back to Step 2, and repeat this process until all of the patches have been fixed.
7. Run `developer_utilities/validate_config.py` to do a sanity check of the patches and patch order.
6. Go back to Step 2, and repeat this process until all of the patches in the series have been fixed.
7. Conditional step:
* If updating all patches, run `./devutils/update_patches.py -as build/src`. If successful, then continue on to the next step. Otherwise, go back to Step 2.
* If updating patches for a specific bundle, then continue on to the next step.
5. Run `./devutils/validate_config.py`
6. Run `quilt pop -a`
7. Conditional step:
* If updating all patches, run `devutils/validate_patches.py -l build/src`. If errors occur, go back to Step 3.
* If updating patches for a specific bundle, add `-b BUNDLE_PATH_HERE` to the command for all patches above. If errors occur, go back to Step 3.
This should leave unstaged changes in the git repository to be reviewed, added, and committed.
@@ -66,12 +87,11 @@ If you used `quilt new` anywhere during the update process, remember to add that
### Steps for fixing patches after a failed build attempt
If domain substitution is not applied, then the steps from the previous section (steps 2-6) will work for revising patches.
If domain substitution is not applied, then the steps from the previous section will work for revising patches.
If domain substitution is applied, then the steps for the initial update will not apply since that would create patches which depend on domain substitution (which is undesirable for use cases that don't use domain substitution). Here is a method of dealing with this:
If domain substitution is applied, then the steps for the initial update will not apply since that would create patches which depend on domain substitution. Here is a method of dealing with this:
1. Use quilt to update the domain-substituted copy of the patch set
2. Copy back modified patches to the repository after reverting domain substitution on the patches manually
3. Run `developer_utilities/invert_domain_substitution.py` to invert the patches by specifying the proper base bundle.
3. Attempt a build.
4. Repeat entire procedure if there is a failure.
1. Revert domain substitution: `python3 -m buildkit domains revert -c CACHE_PATH_HERE build/src`
2. Follow the patch updating section above
3. Reapply domain substitution: `python3 -m buildkit domains apply -b BUNDLE_PATH_HERE -c CACHE_PATH_HERE build/src`
4. Reattempt build. Repeat steps as necessary.

208
get_package.py Executable file
View File

@@ -0,0 +1,208 @@
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
# Copyright (c) 2018 The ungoogled-chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Simple package script generator.
"""
import argparse
import re
import shutil
import string
import subprocess
from pathlib import Path
from buildkit.common import (ENCODING, BuildkitAbort, get_logger, validate_and_get_ini,
get_chromium_version, get_release_revision)
from buildkit.third_party import schema
# Constants
_ROOT_DIR = Path(__file__).resolve().parent
_PACKAGING_ROOT = _ROOT_DIR / 'packaging'
_PKGMETA = _PACKAGING_ROOT / 'pkgmeta.ini'
_PKGMETA_SCHEMA = schema.Schema({
schema.Optional(schema.And(str, len)): {
schema.Optional('depends'): schema.And(str, len),
schema.Optional('buildkit_copy'): schema.And(str, len),
}
})
# Classes
class _BuildFileStringTemplate(string.Template):
"""
Custom string substitution class
Inspired by
http://stackoverflow.com/questions/12768107/string-substitutions-using-templates-in-python
"""
pattern = r"""
{delim}(?:
(?P<escaped>{delim}) |
_(?P<named>{id}) |
{{(?P<braced>{id})}} |
(?P<invalid>{delim}((?!_)|(?!{{)))
)
""".format(
delim=re.escape("$ungoog"), id=string.Template.idpattern)
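To make the custom pattern concrete, here is the same class exercised in isolation; the placeholder names and version values below are made up for demonstration:

```python
import re
import string

class BuildFileStringTemplate(string.Template):
    """Same pattern as _BuildFileStringTemplate above: the delimiter is '$ungoog'."""
    pattern = r"""
    {delim}(?:
      (?P<escaped>{delim}) |
      _(?P<named>{id}) |
      {{(?P<braced>{id})}} |
      (?P<invalid>{delim}((?!_)|(?!{{)))
    )
    """.format(
        delim=re.escape("$ungoog"), id=string.Template.idpattern)

# Both '$ungoog{name}' and '$ungoog_name' forms are substituted; values are hypothetical
template = BuildFileStringTemplate("pkgver=$ungoog{chromium_version} rel=$ungoog_release_revision")
result = template.substitute(chromium_version='68.0.3440.75', release_revision='1')
print(result)  # pkgver=68.0.3440.75 rel=1
```

Note that `string.Template` always compiles a custom `pattern` with `re.VERBOSE`, which is why the whitespace in the pattern above is harmless.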
# Methods
def _process_templates(root_dir, build_file_subs):
"""
Recursively substitute '$ungoog' strings in '.ungoogin' template files and
remove the suffix
"""
for old_path in root_dir.rglob('*.ungoogin'):
new_path = old_path.with_name(old_path.stem)
old_path.replace(new_path)
with new_path.open('r+', encoding=ENCODING) as new_file:
content = _BuildFileStringTemplate(new_file.read()).substitute(**build_file_subs)
new_file.seek(0)
new_file.write(content)
new_file.truncate()
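A minimal, self-contained run of the rename-and-substitute logic above, using the stock `string.Template` (with its default `$` delimiter) as a stand-in for the custom class, on a throwaway directory:

```python
import string
import tempfile
from pathlib import Path

ENCODING = 'UTF-8'  # matches buildkit.common.ENCODING

def process_templates(root_dir, subs):
    """Simplified _process_templates: strip '.ungoogin' and substitute in place."""
    for old_path in root_dir.rglob('*.ungoogin'):
        new_path = old_path.with_name(old_path.stem)  # drop the '.ungoogin' suffix
        old_path.replace(new_path)
        with new_path.open('r+', encoding=ENCODING) as new_file:
            content = string.Template(new_file.read()).substitute(**subs)
            new_file.seek(0)
            new_file.write(content)
            new_file.truncate()

# Throwaway directory and made-up template file
root = Path(tempfile.mkdtemp())
(root / 'changelog.ungoogin').write_text('version=${chromium_version}\n', encoding=ENCODING)
process_templates(root, dict(chromium_version='68.0.3440.75'))
rendered = (root / 'changelog').read_text(encoding=ENCODING)
print(rendered)  # version=68.0.3440.75
```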
def _get_current_commit():
"""
Returns a string of the current commit hash.
It assumes "git" is in PATH, and that buildkit is run within a git repository.
Raises BuildkitAbort if invoking git fails.
"""
result = subprocess.run(
['git', 'rev-parse', '--verify', 'HEAD'],
stdout=subprocess.PIPE,
universal_newlines=True,
cwd=str(Path(__file__).resolve().parent))
if result.returncode:
get_logger().error('Unexpected return code %s', result.returncode)
get_logger().error('Command output: %s', result.stdout)
raise BuildkitAbort()
return result.stdout.strip('\n')
def _get_package_dir_list(package, pkgmeta):
"""
Returns a list of pathlib.Path to packaging directories to be copied,
ordered by dependencies first.
Raises FileNotFoundError if a package directory cannot be found.
"""
package_list = list()
current_name = package
while current_name:
package_list.append(_PACKAGING_ROOT / current_name)
if not package_list[-1].exists(): #pylint: disable=no-member
raise FileNotFoundError(package_list[-1])
if current_name in pkgmeta and 'depends' in pkgmeta[current_name]:
current_name = pkgmeta[current_name]['depends']
else:
break
package_list.reverse()
return package_list
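The dependency resolution can be sketched without touching the filesystem; this simplified version drops the existence check, and the package names are hypothetical:

```python
from pathlib import Path

def get_package_dir_list(package, pkgmeta, packaging_root):
    """Simplified _get_package_dir_list: walk the 'depends' chain, base first."""
    package_list = []
    current_name = package
    while current_name:
        package_list.append(packaging_root / current_name)
        current_name = pkgmeta.get(current_name, {}).get('depends')
    # Reverse so the base package comes first and dependents can override its files
    package_list.reverse()
    return package_list

# Hypothetical pkgmeta: a flavor layered on top of a minimal base
pkgmeta = {'debian_buster': {'depends': 'debian_minimal'}}
dirs = get_package_dir_list('debian_buster', pkgmeta, Path('packaging'))
print([d.name for d in dirs])  # ['debian_minimal', 'debian_buster']
```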
def _get_package_files(package_dir_list):
"""Yields tuples of relative and full package file paths"""
resolved_files = dict()
for package_dir in package_dir_list:
for file_path in package_dir.rglob('*'):
relative_path = file_path.relative_to(package_dir)
resolved_files[relative_path] = file_path
yield from sorted(resolved_files.items())
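The overlay semantics above, where files from later (dependent) package directories shadow earlier ones at the same relative path, can be demonstrated with scratch directories (the directory and file names here are made up):

```python
import tempfile
from pathlib import Path

def get_package_files(package_dir_list):
    """Same overlay logic as _get_package_files: later directories win."""
    resolved_files = dict()
    for package_dir in package_dir_list:
        for file_path in package_dir.rglob('*'):
            resolved_files[file_path.relative_to(package_dir)] = file_path
    yield from sorted(resolved_files.items())

root = Path(tempfile.mkdtemp())
base = root / 'base'
flavor = root / 'flavor'
base.mkdir()
flavor.mkdir()
(base / 'rules').write_text('base rules')
(base / 'control').write_text('base control')
(flavor / 'rules').write_text('flavor rules')  # shadows base/rules

files = dict(get_package_files([base, flavor]))
print(files[Path('rules')].read_text())  # flavor rules
```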
def _get_buildkit_copy(package, pkgmeta):
"""
Returns a pathlib.Path relative to the output directory to copy buildkit and bundles to,
otherwise returns None if buildkit does not need to be copied.
"""
while package:
if package in pkgmeta:
if 'buildkit_copy' in pkgmeta[package]:
return Path(pkgmeta[package]['buildkit_copy'])
if 'depends' in pkgmeta[package]:
package = pkgmeta[package]['depends']
else:
break
else:
break
return None
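Since this function only reads `pkgmeta`, its chain walk can be exercised directly with a hypothetical configuration (the package names and path below are invented for illustration):

```python
from pathlib import Path

def get_buildkit_copy(package, pkgmeta):
    """Same chain walk as _get_buildkit_copy: inherit buildkit_copy via 'depends'."""
    while package:
        if package in pkgmeta:
            if 'buildkit_copy' in pkgmeta[package]:
                return Path(pkgmeta[package]['buildkit_copy'])
            package = pkgmeta[package].get('depends')
        else:
            break
    return None

# Hypothetical pkgmeta: the flavor inherits buildkit_copy from its base
pkgmeta = {
    'debian_buster': {'depends': 'debian_minimal'},
    'debian_minimal': {'buildkit_copy': 'debian/scripts/ungoogled-chromium'},
}
print(get_buildkit_copy('debian_buster', pkgmeta))  # debian/scripts/ungoogled-chromium
```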
def main(): #pylint: disable=too-many-branches
"""CLI Entrypoint"""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('name', help='Name of packaging to generate')
parser.add_argument('destination', type=Path, help='Directory to store packaging files')
args = parser.parse_args()
# Argument validation
if not args.destination.parent.exists():
parser.error('Destination parent directory "{}" does not exist'.format(
args.destination.parent))
if not _PACKAGING_ROOT.exists(): #pylint: disable=no-member
parser.error('Cannot find "packaging" directory next to this script')
packaging_dir = _PACKAGING_ROOT / args.name
if not packaging_dir.exists():
parser.error('Packaging "{}" does not exist'.format(args.name))
if not _PKGMETA.exists(): #pylint: disable=no-member
parser.error('Cannot find pkgmeta.ini in packaging directory')
if not args.destination.exists():
args.destination.mkdir()
# Copy packaging files to destination
pkgmeta = validate_and_get_ini(_PKGMETA, _PKGMETA_SCHEMA)
for relative_path, actual_path in _get_package_files(_get_package_dir_list(args.name, pkgmeta)):
if actual_path.is_dir():
if not (args.destination / relative_path).exists():
(args.destination / relative_path).mkdir()
shutil.copymode(str(actual_path), str(args.destination / relative_path))
else:
shutil.copy(str(actual_path), str(args.destination / relative_path))
# Substitute .ungoogin files
packaging_subs = dict(
chromium_version=get_chromium_version(),
release_revision=get_release_revision(),
current_commit=_get_current_commit(),
)
_process_templates(args.destination, packaging_subs)
# Copy buildkit and config files, if necessary
buildkit_copy_relative = _get_buildkit_copy(args.name, pkgmeta)
if buildkit_copy_relative:
if not (args.destination / buildkit_copy_relative).exists():
(args.destination / buildkit_copy_relative).mkdir()
shutil.copy(
str(_ROOT_DIR / 'version.ini'),
str(args.destination / buildkit_copy_relative / 'version.ini'))
if (args.destination / buildkit_copy_relative / 'buildkit').exists():
shutil.rmtree(str(args.destination / buildkit_copy_relative / 'buildkit'))
shutil.copytree(
str(_ROOT_DIR / 'buildkit'),
str(args.destination / buildkit_copy_relative / 'buildkit'))
if (args.destination / buildkit_copy_relative / 'patches').exists():
shutil.rmtree(str(args.destination / buildkit_copy_relative / 'patches'))
shutil.copytree(
str(_ROOT_DIR / 'patches'), str(args.destination / buildkit_copy_relative / 'patches'))
if (args.destination / buildkit_copy_relative / 'config_bundles').exists():
shutil.rmtree(str(args.destination / buildkit_copy_relative / 'config_bundles'))
shutil.copytree(
str(_ROOT_DIR / 'config_bundles'),
str(args.destination / buildkit_copy_relative / 'config_bundles'))
if __name__ == '__main__':
main()

View File

@@ -31,11 +31,11 @@ conflicts=('chromium')
source=(https://commondatastorage.googleapis.com/chromium-browser-official/chromium-$pkgver.tar.xz
chromium-launcher-$_launcher_ver.tar.gz::https://github.com/foutrelis/chromium-launcher/archive/v$_launcher_ver.tar.gz
chromium-$pkgver.txt::https://chromium.googlesource.com/chromium/src/+/$pkgver?format=TEXT
'$ungoog{repo_url}')
'https://github.com/Eloston/ungoogled-chromium/archive/$ungoog{current_commit}.tar.gz')
sha256sums=('73bfa25d41c432ba5a542b20043b62118bc8451bb9e031edc7394cc65d6b5d64'
'04917e3cd4307d8e31bfb0027a5dce6d086edb10ff8a716024fbb8bb0c7dccf1'
'0be4bc2e759d2d6136f9baa1d4624aefbababe0cbbd2d1632b76f01654d70524'
'$ungoog{repo_hash}')
'SKIP')
# Possible replacements are listed in build/linux/unbundle/replace_gn_files.py
# Keys are the names in the above script; values are the dependencies in Arch
@@ -68,21 +68,23 @@ depends+=(${_system_libs[@]})
prepare() {
local _tree="$srcdir/chromium-$pkgver"
local _user_bundle="$srcdir/chromium-$pkgver/ungoogled"
cd "$srcdir/$pkgname-$pkgver-$pkgrel"
cd "$srcdir/$pkgname-$ungoog{current_commit}"
msg2 'Processing sources'
python buildkit-launcher.py genbun -u "$_user_bundle" archlinux
python buildkit-launcher.py prubin -u "$_user_bundle" -t "$_tree"
python buildkit-launcher.py subdom -u "$_user_bundle" -t "$_tree"
ln -s ../patch_order.list "$_user_bundle/patches/series"
python -m buildkit prune -b config_bundles/archlinux "$_tree"
cd "$srcdir/chromium-$pkgver"
cd "$_tree"
msg2 'Applying build patches'
# Apply patches
env QUILT_PATCHES="$_user_bundle/patches" quilt push -a
python -m buildkit patches apply -b config_bundles/archlinux "$_tree"
cd "$srcdir/$pkgname-$ungoog{current_commit}"
msg2 'Applying domain substitution'
python -m buildkit domains apply -b config_bundles/archlinux -c domainsubcache.tar.gz "$_tree"
cd "$_tree"
# Force script incompatible with Python 3 to use /usr/bin/python2
sed -i '1s|python$|&2|' third_party/dom_distiller_js/protoc_plugins/*.py
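The `sed` expression above (`1s|python$|&2|`) appends `2` to a first line ending in `python`; the equivalent substitution expressed in Python, on a sample shebang (sed's `&` corresponds to `\g<0>`, the whole match):

```python
import re

shebang = '#!/usr/bin/env python'  # first line of a hypothetical script
rewritten = re.sub(r'python$', r'\g<0>2', shebang)
print(rewritten)  # #!/usr/bin/env python2
```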
@@ -117,16 +119,16 @@ build() {
export CCACHE_SLOPPINESS=time_macros
fi
mkdir -p $ungoog{build_output}
mkdir -p out/Default
export CC=clang
export CXX=clang++
export AR=llvm-ar
export NM=llvm-nm
local _flags=(
$ungoog{gn_flags}
)
pushd "$srcdir/$pkgname-$ungoog{current_commit}"
python -m buildkit gnargs print -b config_bundles/archlinux > "$_tree/out/Default/args.gn"
popd
# Facilitate deterministic builds (taken from build/config/compiler/BUILD.gn)
CFLAGS+=' -Wno-builtin-macro-redefined'
@@ -134,13 +136,14 @@ $ungoog{gn_flags}
CPPFLAGS+=' -D__DATE__= -D__TIME__= -D__TIMESTAMP__='
msg2 'Building GN'
python2 tools/gn/bootstrap/bootstrap.py -o $ungoog{build_output}/gn -s --no-clean
python2 tools/gn/bootstrap/bootstrap.py -o out/Default/gn -s --no-clean
msg2 'Configuring Chromium'
$ungoog{build_output}/gn gen $ungoog{build_output} --args="${_flags[*]}" \
out/Default/gn gen out/Default \
--script-executable=/usr/bin/python2 --fail-on-unused-args
msg2 'Building Chromium'
ninja -C $ungoog{build_output} chrome chrome_sandbox chromedriver
ninja -C out/Default chrome chrome_sandbox chromedriver
}
package() {
@@ -151,8 +154,8 @@ package() {
cd "$srcdir/chromium-$pkgver"
install -D $ungoog{build_output}/chrome "$pkgdir/usr/lib/chromium/chromium"
install -Dm4755 $ungoog{build_output}/chrome_sandbox "$pkgdir/usr/lib/chromium/chrome-sandbox"
install -D out/Default/chrome "$pkgdir/usr/lib/chromium/chromium"
install -Dm4755 out/Default/chrome_sandbox "$pkgdir/usr/lib/chromium/chrome-sandbox"
ln -s /usr/lib/$pkgname/chromedriver "$pkgdir/usr/bin/chromedriver"
install -Dm644 chrome/installer/linux/common/desktop.template \
@@ -167,13 +170,13 @@ package() {
"$pkgdir/usr/share/man/man1/chromium.1"
cp \
$ungoog{build_output}/{chrome_{100,200}_percent,resources}.pak \
$ungoog{build_output}/{*.bin,chromedriver} \
out/Default/{chrome_{100,200}_percent,resources}.pak \
out/Default/{*.bin,chromedriver} \
"$pkgdir/usr/lib/chromium/"
install -Dm644 -t "$pkgdir/usr/lib/chromium/locales" $ungoog{build_output}/locales/*.pak
install -Dm644 -t "$pkgdir/usr/lib/chromium/locales" out/Default/locales/*.pak
if [[ -z ${_system_libs[icu]+set} ]]; then
cp $ungoog{build_output}/icudtl.dat "$pkgdir/usr/lib/chromium/"
cp out/Default/icudtl.dat "$pkgdir/usr/lib/chromium/"
fi
for size in 22 24 48 64 128 256; do

View File

@@ -0,0 +1,5 @@
ungoogled-chromium-browser ($ungoog{chromium_version}-$ungoog{release_revision}~buster) buster; urgency=medium
* Built against commit $ungoog{current_commit}
-- ungoogled-chromium Authors <maintainer@null> Sun, 29 Jul 2018 00:00:00 +0000

View File

@@ -38,8 +38,8 @@ ifeq (armhf,$(DEB_HOST_ARCH))
defines+=host_cpu=\"arm\" arm_use_neon=false
endif
# auto-inserted gn flags
$ungoog{gn_flags}
# add gn flags from config bundle
defines+=$(shell debian/get_gnargs_shell config_bundles/$(shell cat debian/ungoogled-config-bundle))
# some notes about embedded libraries
# can't use system nss since net/third_party/nss is heavily patched
@@ -58,9 +58,23 @@ flotpaths=/usr/share/javascript/jquery/*min.js \
%:
dh $@
$ungoog{build_output}/gn:
mkdir -p $ungoog{build_output} || true
./tools/gn/bootstrap/bootstrap.py -o $ungoog{build_output}/gn -s -j$(njobs)
out/Default/gn:
mkdir -p out/Default || true
./tools/gn/bootstrap/bootstrap.py -o out/Default/gn -s -j$(njobs)
override_dh_quilt_patch:
pushd debian/scripts/ungoogled-chromium
python3 -m buildkit patches export -b config_bundles/$(cat ungoogled-config-bundle) ../../patches/
python3 -m buildkit prune -b config_bundles/$(cat ungoogled-config-bundle) ../../../
python3 -m buildkit domains apply -b config_bundles/$(cat ungoogled-config-bundle) -c domsubcache.tar.gz ../../../
popd
dh_quilt_patch
override_dh_quilt_unpatch:
pushd debian/ungoogled-chromium
python3 -m buildkit domains revert -c domsubcache.tar.gz ../../../
popd
dh_quilt_unpatch
override_dh_auto_configure:
# output compiler information
@@ -70,22 +84,22 @@ override_dh_auto_configure:
# strip out system third_party libraries
./debian/scripts/unbundle
override_dh_auto_build-arch: $ungoog{build_output}/gn
./$ungoog{build_output}/gn gen $ungoog{build_output} --args="$(defines)" --fail-on-unused-args
ninja -j$(njobs) -C $ungoog{build_output} chrome chrome_sandbox content_shell chromedriver
override_dh_auto_build-arch: out/Default/gn
./out/Default/gn gen out/Default --args="$(defines)" --fail-on-unused-args
ninja -j$(njobs) -C out/Default chrome chrome_sandbox content_shell chromedriver
override_dh_auto_build-indep: $ungoog{build_output}/gn
./$ungoog{build_output}/gn gen $ungoog{build_output} --args="$(defines)" --fail-on-unused-args
ninja -j$(njobs) -C $ungoog{build_output} packed_resources
override_dh_auto_build-indep: out/Default/gn
./out/Default/gn gen out/Default --args="$(defines)" --fail-on-unused-args
ninja -j$(njobs) -C out/Default packed_resources
override_dh_auto_install-arch:
cp $ungoog{build_output}/chrome $ungoog{build_output}/chromium
cp $ungoog{build_output}/content_shell $ungoog{build_output}/chromium-shell
cp $ungoog{build_output}/chrome_sandbox $ungoog{build_output}/chrome-sandbox
cp $ungoog{build_output}/locales/en-US.pak $ungoog{build_output}/resources
chmod 4755 $ungoog{build_output}/chrome-sandbox # suid sandbox
cp out/Default/chrome out/Default/chromium
cp out/Default/content_shell out/Default/chromium-shell
cp out/Default/chrome_sandbox out/Default/chrome-sandbox
cp out/Default/locales/en-US.pak out/Default/resources
chmod 4755 out/Default/chrome-sandbox # suid sandbox
sed -e s/@@PACKAGE@@/chromium/g -e s/@@MENUNAME@@/Chromium/g \
< chrome/app/resources/manpage.1.in > $ungoog{build_output}/chromium.1
< chrome/app/resources/manpage.1.in > out/Default/chromium.1
dh_auto_install
# create /etc/chromium.d README file
echo "Any files placed in this directory will be sourced prior to executing chromium." \
@@ -97,7 +111,7 @@ override_dh_auto_install-arch:
./debian/scripts/icons
override_dh_auto_install-indep:
rm -f $ungoog{build_output}/locales/en-US.pak
rm -f out/Default/locales/en-US.pak
dh_auto_install
override_dh_fixperms:

View File

@@ -0,0 +1 @@
out/Default/*.bin usr/lib/chromium

View File

@@ -0,0 +1 @@
out/Default/chromedriver usr/lib/chromium

View File

@@ -0,0 +1 @@
out/Default/locales usr/lib/chromium

View File

@@ -0,0 +1,6 @@
debian/scripts/chromium-shell usr/bin
out/Default/chromium-shell usr/lib/chromium
out/Default/content_shell.pak usr/lib/chromium
out/Default/shell_resources.pak usr/lib/chromium

Some files were not shown because too many files have changed in this diff.