83 Commits

Author SHA1 Message Date
Jamie Hardt
1c8feec8fe Added description to module 2023-02-28 10:52:19 -08:00
Jamie Hardt
f510f98ede Bump vers 2023-02-28 10:50:17 -08:00
Jamie Hardt
ddf1948f3c Upgraded to pyproject/flit build style 2023-02-28 10:49:52 -08:00
Jamie Hardt
1c9d373b40 Merge pull request #6 from iluvcapra/dependabot/pip/docs/certifi-2022.12.7
Bump certifi from 2022.9.24 to 2022.12.7 in /docs
2022-12-09 09:07:18 -08:00
dependabot[bot]
51b2517db1 Bump certifi from 2022.9.24 to 2022.12.7 in /docs
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.9.24 to 2022.12.7.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2022.09.24...2022.12.07)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-09 09:23:51 +00:00
Jamie Hardt
27dd8bc94d Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-20 20:24:05 -08:00
Jamie Hardt
dd394a8fec Reading project metadata from project 2022-11-20 20:23:51 -08:00
Jamie Hardt
b5571891cf Update setup.py 2022-11-20 19:08:13 -08:00
Jamie Hardt
73058e9423 Update python-package.yml
Adding Python 3.11 to the build matrix
2022-11-20 19:06:10 -08:00
Jamie Hardt
a11cda40e5 Update pythonpublish.yml 2022-11-20 14:14:26 -08:00
Jamie Hardt
7381a37185 Update pythonpublish.yml
Added hashtags to mastodon message
2022-11-20 14:13:58 -08:00
Jamie Hardt
065bd26f4c Refactored symbol 2022-11-20 13:31:10 -08:00
Jamie Hardt
7ec983f63f Refactored file name 2022-11-20 13:21:15 -08:00
Jamie Hardt
944e66728b Added some tests 2022-11-20 13:14:20 -08:00
Jamie Hardt
6473c83785 .gitignore 2022-11-20 13:03:34 -08:00
Jamie Hardt
8947d409b4 Delete .vim directory 2022-11-20 13:02:26 -08:00
Jamie Hardt
0494e771be Delete .vscode directory 2022-11-20 13:02:18 -08:00
Jamie Hardt
f00bea8702 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-20 12:55:18 -08:00
Jamie Hardt
6e82a14e4f Cleaned up requirements 2022-11-20 12:55:03 -08:00
Jamie Hardt
07669e4eca Update pythonpublish.yml
Added post to Mastodon
2022-11-20 10:53:35 -08:00
Jamie Hardt
ddc406b1eb Update toot.yml 2022-11-20 10:35:29 -08:00
Jamie Hardt
e07b3bb604 Update toot.yml 2022-11-20 10:28:13 -08:00
Jamie Hardt
c02453d10f Create toot.yml 2022-11-20 10:18:45 -08:00
Jamie Hardt
cdc8a838ac Update pythonpublish.yml 2022-11-20 10:12:53 -08:00
Jamie Hardt
e2c7408413 Update pythonpublish.yml 2022-11-20 10:08:52 -08:00
Jamie Hardt
a18154edb0 Update README.md 2022-11-20 08:25:06 -08:00
Jamie Hardt
f15ee40d37 Update README.md 2022-11-20 08:18:53 -08:00
Jamie Hardt
cd26be0c20 unfreezing importlib 2022-11-19 21:42:10 -08:00
Jamie Hardt
d50e45882b Trying to make refs look nice 2022-11-19 21:37:50 -08:00
Jamie Hardt
adb80eb174 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-19 19:04:55 -08:00
Jamie Hardt
2b91f128b9 Refactoring 2022-11-19 19:04:53 -08:00
Jamie Hardt
9f24d45f25 Documentation 2022-11-19 19:02:47 -08:00
Jamie Hardt
3a58fdba75 Some refactoring 2022-11-19 14:47:26 -08:00
Jamie Hardt
800a4dfb12 Adjust warnings 2022-11-19 14:10:30 -08:00
Jamie Hardt
6bc98063db Freeze importlib 2022-11-19 14:04:34 -08:00
Jamie Hardt
b1bf49ca82 Update LICENSE 2022-11-19 00:00:15 -08:00
Jamie Hardt
61250aaf63 Dev docs 2022-11-18 21:26:50 -08:00
Jamie Hardt
43df2c1558 Adding the whole requirements 2022-11-18 20:50:09 -08:00
Jamie Hardt
17dc868756 Hide doc from parent 2022-11-18 20:46:59 -08:00
Jamie Hardt
2e36a789b4 Twiddle docs 2022-11-18 20:39:53 -08:00
Jamie Hardt
1345113a85 Documentation 2022-11-18 20:18:26 -08:00
Jamie Hardt
76c2e24084 Developer documentation 2022-11-18 19:32:00 -08:00
Jamie Hardt
a5ed16849c Documentation 2022-11-18 19:18:08 -08:00
Jamie Hardt
4c3e103e77 Test refinements 2022-11-18 19:09:37 -08:00
Jamie Hardt
dd767b2d41 Merge branches 'master' and 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-18 18:51:48 -08:00
Jamie Hardt
aaf751c1a2 Reorganized docs into folders 2022-11-18 18:51:45 -08:00
Jamie Hardt
91e0da278f Delete .idea directory 2022-11-18 18:47:36 -08:00
Jamie Hardt
a7d01779bd Doc twiddle 2022-11-18 18:44:41 -08:00
Jamie Hardt
cb6c0c8895 Doc tweaks 2022-11-18 18:38:44 -08:00
Jamie Hardt
a2a6782214 Added note 2022-11-18 18:36:35 -08:00
Jamie Hardt
2c78d4a09d Directive implementation 2022-11-18 18:33:51 -08:00
Jamie Hardt
28cf7b5d09 Directive parsing 2022-11-18 16:59:39 -08:00
Jamie Hardt
b419814f82 Doc updates 2022-11-18 16:51:56 -08:00
Jamie Hardt
967ef5c63a Developer docs 2022-11-18 16:26:55 -08:00
Jamie Hardt
fe1a1eebd5 Docs 2022-11-18 16:20:18 -08:00
Jamie Hardt
dadeab49fe New feature doc 2022-11-18 16:14:55 -08:00
Jamie Hardt
900dd5d582 More doc work 2022-11-18 15:37:02 -08:00
Jamie Hardt
5882e01b31 Updated requirements for doc 2022-11-18 15:36:54 -08:00
Jamie Hardt
e2e86faf54 Documentation 2022-11-18 13:03:37 -08:00
Jamie Hardt
bfdefc8da0 Documentation 2022-11-18 12:23:31 -08:00
Jamie Hardt
2af9317e7e Removed refs to CSV
Added more text.
2022-11-18 11:45:58 -08:00
Jamie Hardt
9194e5ba54 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-18 11:34:11 -08:00
Jamie Hardt
528bd949ca Restructuring documenation
Swiching to readthedocs.io
2022-11-18 11:33:47 -08:00
Jamie Hardt
5633eb89f0 Update README.md 2022-11-16 21:05:03 -08:00
Jamie Hardt
29e1753b18 Tweaking this code to silence errors in the github build 2022-11-15 12:28:50 -08:00
Jamie Hardt
1df0b79ab6 Tweaked tag parsing 2022-11-15 12:26:06 -08:00
Jamie Hardt
68db6c9b09 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-15 12:15:45 -08:00
Jamie Hardt
2c664db0dd Updated requirements with latest stuff 2022-11-15 12:14:28 -08:00
Jamie Hardt
e46ac14118 Update python-package.yml 2022-11-15 12:09:58 -08:00
Jamie Hardt
bf3a5c37a8 Added conftest.py to fix pytest 2022-11-15 20:08:30 +00:00
Jamie Hardt
d3b08e9238 Addressed some lint notes 2022-11-15 20:06:11 +00:00
Jamie Hardt
c0d192e651 Delete test-coverage.sh 2022-11-15 11:47:46 -08:00
Jamie Hardt
d3cc9074c4 Update pythonpublish.yml 2022-11-15 11:27:18 -08:00
Jamie Hardt
87108c7865 Update __init__.py
Bump version
2022-11-15 10:28:42 -08:00
Jamie Hardt
04422360f0 Tweaks to quickstart 2022-11-06 14:26:08 -08:00
Jamie Hardt
cd4122ce50 Update README.md 2022-11-06 14:23:52 -08:00
Jamie Hardt
a176d3b1f5 Update README.md
Added link to Quickstart
2022-11-06 14:20:02 -08:00
Jamie Hardt
8a6f5e755b Update QUICKSTART.md 2022-11-06 14:17:07 -08:00
Jamie Hardt
b4fef4b13f Update QUICKSTART.md 2022-11-06 14:00:39 -08:00
Jamie Hardt
6fc7f26e9c Some documentation 2022-11-06 13:59:56 -08:00
Jamie Hardt
09b3f9349b Update HOWTO.md 2022-11-06 13:25:52 -08:00
Jamie Hardt
f6ee807ede Create HOWTO.md 2022-11-06 13:19:30 -08:00
Jamie Hardt
f114012d4a Attempt at some online documentation 2022-11-06 13:05:15 -08:00
44 changed files with 1077 additions and 378 deletions


@@ -16,7 +16,7 @@ jobs:
strategy:
fail-fast: false
matrix:
-python-version: [3.7, 3.8, 3.9, "3.10"]
+python-version: [3.7, 3.8, 3.9, "3.10", "3.11"]
steps:
- uses: actions/checkout@v2.5.0
@@ -28,7 +28,7 @@ jobs:
run: |
python -m pip install --upgrade pip
python -m pip install flake8 pytest
-if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+pip install -e .
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
@@ -37,4 +37,4 @@ jobs:
flake8 ptulsconv tests --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
-PYTHONPATH=. pytest
+pytest


@@ -2,7 +2,7 @@ name: Upload Python Package
on:
release:
-types: [created]
+types: [published]
jobs:
deploy:
@@ -16,7 +16,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
-pip install setuptools wheel twine
+pip install build twine
- name: Install parsimonious
run: |
pip install parsimonious
@@ -25,5 +25,15 @@ jobs:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_UPLOAD_API_KEY }}
run: |
-python setup.py sdist bdist_wheel
+python -m build
twine upload dist/*
- name: Report to Mastodon
uses: cbrgm/mastodon-github-action@v1.0.1
with:
message: |
I just released a new version of ptulsconv, my ADR cue sheet generator!
#python #protools #pdf #filmmaking
${{ github.server_url }}/${{ github.repository }}
env:
MASTODON_URL: ${{ secrets.MASTODON_URL }}
MASTODON_ACCESS_TOKEN: ${{ secrets.MASTODON_ACCESS_TOKEN }}

.github/workflows/toot.yml vendored Normal file

@@ -0,0 +1,22 @@
name: Test Toot
on:
workflow_dispatch:
jobs:
print-tag:
runs-on: ubuntu-latest
steps:
- name: Report to Mastodon
uses: cbrgm/mastodon-github-action@v1.0.1
env:
MASTODON_URL: ${{ secrets.MASTODON_URL }}
MASTODON_ACCESS_TOKEN: ${{ secrets.MASTODON_ACCESS_TOKEN }}
with:
message: |
This is a test toot, automatically posted by a github action.
${{ github.server_url }}/${{ github.repository }}
${{ github.ref }}

.gitignore vendored

@@ -89,6 +89,7 @@ venv/
ENV/
env.bak/
venv.bak/
venv_docs/
# Spyder project settings
.spyderproject
@@ -105,3 +106,6 @@ venv.bak/
.DS_Store
/example/Charade/Session File Backups/
lcov.info
.vim
.vscode

.idea/workspace.xml generated

@@ -1,66 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ChangeListManager">
<list default="true" id="68bdb183-5bdf-4b42-962e-28e58c31a89d" name="Default Changelist" comment="">
<change beforePath="$PROJECT_DIR$/.idea/misc.xml" beforeDir="false" afterPath="$PROJECT_DIR$/.idea/misc.xml" afterDir="false" />
<change beforePath="$PROJECT_DIR$/.idea/ptulsconv.iml" beforeDir="false" afterPath="$PROJECT_DIR$/.idea/ptulsconv.iml" afterDir="false" />
</list>
<option name="SHOW_DIALOG" value="false" />
<option name="HIGHLIGHT_CONFLICTS" value="true" />
<option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" />
<option name="LAST_RESOLUTION" value="IGNORE" />
</component>
<component name="Git.Settings">
<option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$" />
</component>
<component name="GitSEFilterConfiguration">
<file-type-list>
<filtered-out-file-type name="LOCAL_BRANCH" />
<filtered-out-file-type name="REMOTE_BRANCH" />
<filtered-out-file-type name="TAG" />
<filtered-out-file-type name="COMMIT_BY_MESSAGE" />
</file-type-list>
</component>
<component name="ProjectId" id="1yyIGrXKNUCYtI4PSaCWGoLG76R" />
<component name="ProjectLevelVcsManager" settingsEditedManually="true" />
<component name="ProjectViewState">
<option name="hideEmptyMiddlePackages" value="true" />
<option name="showLibraryContents" value="true" />
<option name="showMembers" value="true" />
</component>
<component name="PropertiesComponent">
<property name="RunOnceActivity.OpenProjectViewOnStart" value="true" />
<property name="RunOnceActivity.ShowReadmeOnStart" value="true" />
</component>
<component name="SpellCheckerSettings" RuntimeDictionaries="0" Folders="0" CustomDictionaries="0" DefaultDictionary="project-level" UseSingleDictionary="true" transferred="true" />
<component name="TaskManager">
<task active="true" id="Default" summary="Default task">
<changelist id="68bdb183-5bdf-4b42-962e-28e58c31a89d" name="Default Changelist" comment="" />
<created>1633217312285</created>
<option name="number" value="Default" />
<option name="presentableId" value="Default" />
<updated>1633217312285</updated>
</task>
<task id="LOCAL-00001" summary="Reorganized README a little">
<created>1633221191797</created>
<option name="number" value="00001" />
<option name="presentableId" value="LOCAL-00001" />
<option name="project" value="LOCAL" />
<updated>1633221191797</updated>
</task>
<task id="LOCAL-00002" summary="Manpage 0.8.2 bump">
<created>1633221729867</created>
<option name="number" value="00002" />
<option name="presentableId" value="LOCAL-00002" />
<option name="project" value="LOCAL" />
<updated>1633221729867</updated>
</task>
<option name="localTasksCounter" value="3" />
<servers />
</component>
<component name="VcsManagerConfiguration">
<MESSAGE value="Reorganized README a little" />
<MESSAGE value="Manpage 0.8.2 bump" />
<option name="LAST_COMMIT_MESSAGE" value="Manpage 0.8.2 bump" />
</component>
</project>

.readthedocs.yaml Normal file

@@ -0,0 +1,29 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-20.04
tools:
python: "3.10"
# You can also specify other tool versions:
# nodejs: "16"
# rust: "1.55"
# golang: "1.17"
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/source/conf.py
#If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
- pdf
#Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/requirements.txt


@@ -1,5 +0,0 @@
{
"python.linting.pylintEnabled": true,
"python.linting.enabled": true,
"python.linting.mypyEnabled": false
}


@@ -1,6 +1,6 @@
MIT License
-Copyright (c) 2019 Jamie Hardt
+Copyright (c) 2022 Jamie Hardt
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -1,7 +1,9 @@
[![Documentation Status](https://readthedocs.org/projects/ptulsconv/badge/?version=latest)](https://ptulsconv.readthedocs.io/en/latest/?badge=latest)
![](https://img.shields.io/github/license/iluvcapra/ptulsconv.svg)
![](https://img.shields.io/pypi/pyversions/ptulsconv.svg)
[![](https://img.shields.io/pypi/v/ptulsconv.svg)][pypi]
![Lint and Test](https://github.com/iluvcapra/ptulsconv/actions/workflows/python-package.yml/badge.svg)
![GitHub last commit](https://img.shields.io/github/last-commit/iluvcapra/pycmx)
[![Lint and Test](https://github.com/iluvcapra/ptulsconv/actions/workflows/python-package.yml/badge.svg)](https://github.com/iluvcapra/ptulsconv/actions/workflows/python-package.yml)
[pypi]: https://pypi.org/project/ptulsconv/
@@ -9,39 +11,10 @@
# ptulsconv
Read Pro Tools text exports and generate PDF reports, JSON output.
## Theory of Operation
[Avid Pro Tools][avp] can be used to make spotting notes for ADR recording
sessions by creating spotting regions with descriptive text and exporting the
session as text. This file can then be dropped into Excel or any CSV-reading
app like Filemaker Pro.
**ptulsconv** accepts a text export from Pro Tools and automatically creates
PDF and CSV documents for use in ADR spotting, recording, editing and
reporting, and supplemental JSON documents can be output for use with other
workflows.
### Reports Generated by ptulsconv by Default
1. "ADR Report" lists every line in an export with most useful fields, sorted
by time.
2. "Continuity" lists every scene sorted by time.
3. "Line Count" lists a count of every line, collated by reel number and by
effort/TV/optional line designation.
4. "CSV" is a folder of files of all lines collated by character and reel
as CSV files, for use by studio cueing workflows.
5. "Director Logs" is a folder of PDFs formatted like the "ADR Report" except
collated by character.
6. "Supervisor Logs" creates a PDF report for every character, with one line
per page, optimized for note-taking.
7. "Talent Scripts" is a minimal PDF layout of just timecode and line prompt,
collated by character.
[avp]: http://www.avid.com/pro-tools
## Quick Start
For a quick overview of how to cue ADR with `ptulsconv`, check out the [Quickstart][quickstart].
## Installation
@@ -52,4 +25,6 @@ The easiest way to install on your site is to use `pip`:
This will install the necessary libraries on your host and gives you
command-line access to the tool through an entry-point `ptulsconv`. In a
terminal window type `ptulsconv -h` for a list of available options.
terminal window type `ptulsconv -h` for a list of available options.
[quickstart]: https://ptulsconv.readthedocs.io/en/latest/user/quickstart.html

conftest.py Normal file

docs/Makefile Normal file

@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/requirements.txt Normal file

@@ -0,0 +1,29 @@
alabaster==0.7.12
Babel==2.11.0
certifi==2022.12.7
charset-normalizer==2.1.1
docutils==0.17.1
idna==3.4
imagesize==1.4.1
Jinja2==3.1.2
MarkupSafe==2.1.1
packaging==21.3
parsimonious==0.10.0
Pillow==9.3.0
Pygments==2.13.0
pyparsing==3.0.9
pytz==2022.6
regex==2022.10.31
reportlab==3.6.12
requests==2.28.1
snowballstemmer==2.2.0
Sphinx==5.3.0
sphinx-rtd-theme==1.1.1
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
tqdm==4.64.1
urllib3==1.26.12

docs/source/conf.py Normal file

@@ -0,0 +1,77 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
import sys
import os
sys.path.insert(0, os.path.abspath("../.."))
print(sys.path)
import ptulsconv
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = 'ptulsconv'
copyright = ptulsconv.__copyright__
author = ptulsconv.__author__
release = ptulsconv.__version__
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'sphinx.ext.githubpages',
]
templates_path = ['_templates']
exclude_patterns = []
master_doc = 'index'
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
latex_documents = [
(master_doc, 'ptulsconv.tex', u'ptulsconv Documentation',
u'Jamie Hardt', 'manual'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True


@@ -0,0 +1,7 @@
Contributing
============
Testing
-------
Before submitting PRs or patches, please make sure your branch passes all of the unit tests by running pytest.


@@ -0,0 +1,39 @@
Auxiliary and Helper Modules
============================
Commands Module
---------------
.. automodule:: ptulsconv.commands
:members:
Broadcast Timecode Module
-------------------------
.. automodule:: ptulsconv.broadcast_timecode
:members:
Footage Module
--------------
.. automodule:: ptulsconv.footage
:members:
Reporting Module
----------------
.. automodule:: ptulsconv.reporting
:members:
:undoc-members:
Validations Module
------------------
.. automodule:: ptulsconv.validations
:members:
:undoc-members:


@@ -0,0 +1,9 @@
Parsing
=======
Docparser Classes
-----------------
.. autoclass:: ptulsconv.docparser.adr_entity.ADRLine
:members:
:undoc-members:


@@ -0,0 +1,23 @@
Theory of Operation
===================
Execution Flow When Producing "doc" Output
------------------------------------------
#. The command line argv is read in :py:func:`ptulsconv.__main__.main()`,
which calls :py:func:`ptulsconv.commands.convert()`
#. :func:`ptulsconv.commands.convert()` reads the input with
:func:`ptulsconv.docparser.doc_parser_visitor()`,
which uses the ``parsimonious`` library to parse the input into an abstract
syntax tree, which the parser visitor converts into a
:class:`ptulsconv.docparser.doc_entity.SessionDescriptor`,
which structures all of the data in the session output.
#. The next action depends on the output format. In the
case of the "doc" output format, it runs some validations
on the input, and calls :func:`ptulsconv.commands.generate_documents()`.
#. :func:`ptulsconv.commands.generate_documents()` creates the output folder, creates the
Continuity report with :func:`ptulsconv.pdf.continuity.output_continuity()` (this document
requires some special-casing), and at the tail calls...
#. :func:`ptulsconv.commands.create_adr_reports()`, which creates folders for
(FIXME finish this)
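That control flow can be mirrored with stand-in stubs; everything below is illustrative only, since the real functions live in `ptulsconv.commands` and `ptulsconv.docparser` and take different arguments:

```python
# Illustrative stand-ins only: these mirror the call order described
# above, not the real ptulsconv signatures or return types.
def doc_parser_visitor(text):
    # parse the text export into a session description
    return {"session": "Example", "clips": [{"name": "Line $QN=1"}]}

def run_validations(session):
    # the "doc" format validates the input before generating anything
    assert session["clips"], "no clips parsed"

def create_adr_reports(session):
    # per-character folders, supervisor logs, etc.
    return ["ADR Report", "Director Logs"]

def generate_documents(session):
    # the Continuity report is special-cased first, then the ADR reports
    outputs = ["Continuity"]
    outputs += create_adr_reports(session)
    return outputs

def convert(text, output_format="doc"):
    session = doc_parser_visitor(text)
    if output_format == "doc":
        run_validations(session)
        return generate_documents(session)

print(convert("...fake export text..."))
```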

docs/source/index.rst Normal file

@@ -0,0 +1,39 @@
.. ptulsconv documentation master file, created by
sphinx-quickstart on Fri Nov 18 10:40:33 2022.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to ptulsconv's documentation!
=====================================
`ptulsconv` is a tool for converting Pro Tools text exports into PDF
reports for ADR spotting. It can also be used for converting text
exports into JSON documents for processing by other applications.
.. toctree::
:numbered:
:maxdepth: 2
:caption: User Documentation
user/quickstart
user/tagging
user/for_adr
user/cli_reference
.. toctree::
:numbered:
:maxdepth: 1
:caption: Developer Documentation
dev/contributing
dev/theory
dev/parsing
dev/modules
Indices and tables
==================
* :ref:`modindex`
* :ref:`genindex`
* :ref:`search`


@@ -0,0 +1,74 @@
Command-Line Reference
======================
Usage Form
-----------
Invocations of ptulsconv take the following form:
ptulsconv [options] IN_FILE
Flags
-----
`-h`, `--help`
Show the help message.
`-f FMT`, `--format=FMT`
Select the output format. By default this is `doc`, which will
generate :ref:`ADR reports<adr-reports>`.
The :ref:`other available options<alt-output-options>`
are `raw` and `tagged`.
Informational Options
"""""""""""""""""""""
These options display information and exit without processing any
input documents.
`--show-formats`
Display information about available output formats.
`--show-available-tags`
Display information about tags that are used by the
report generator.
.. _alt-output-options:
Alternate Output Formats
------------------------
.. _raw-output:
`raw` Output
""""""""""""
The "raw" output format is a JSON document of the parsed input data.
The document is a top-level dictionary with keys for the main sections of the text export: `header`,
`files`, `clips`, `plugins`, `tracks` and `markers`, and the values for these are a list of section
entries, or a dictionary of values, in the case of `header`.
The text value of each record and field in the text export is read and output verbatim; no further
processing is done.
.. _tagged-output:
`tagged` Output
"""""""""""""""
The "tagged" output format is also a JSON document based on the parsed input data, after the additional
step of processing all of the :ref:`tags<tags>` in the document.
The document is a top-level array of dictionaries, one for each recognized ADR spotting clip in the
session. Each dictionary has `clip_name`, `track_name` and `session_name` keys, a `tags` key that
contains a dictionary of every parsed tag (after applying tags from all tracks and markers), and
`start` and `end` keys. The `start` and `end` keys contain the parsed timecode representations of
these values in rational number form, as a dictionary with `numerator` and `denominator` keys.
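As a sketch of that shape, a single hypothetical entry might look like the following (all names, tags, and times here are invented for illustration):

```python
import json
from fractions import Fraction

# A hypothetical entry from the "tagged" JSON array, following the shape
# described above; every value here is invented for illustration.
entry = {
    "clip_name": "Get to the ladder!",
    "track_name": "JOHNNY",
    "session_name": "Example Spotting Session",
    "tags": {"QN": "J1001", "R": "Noise"},
    "start": {"numerator": 3600, "denominator": 1},
    "end": {"numerator": 3603, "denominator": 1},
}

# The rational start/end values convert directly to exact Fractions:
start = Fraction(entry["start"]["numerator"], entry["start"]["denominator"])
end = Fraction(entry["end"]["numerator"], entry["end"]["denominator"])
print(json.dumps(entry, indent=2))
print(end - start)  # 3 (an exact three-second spot)
```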


@@ -0,0 +1,129 @@
.. _adr-reports:
`ptulsconv` For ADR Report Generation
=====================================
Reports Created by the ADR Report Generator
-------------------------------------------
(FIXME: write this)
Tags Used by the ADR Report Generator
-------------------------------------
Project-Level Tags
""""""""""""""""""
It usually makes sense to place these either in the session name,
or on a :ref:`marker <tag-marker>` at the beginning of the session, so it will apply to
all of the clips in the session.
`Title`
The title of the project. This will appear at the top
of every report.
.. warning::
`ptulsconv` at this time only supports one title per export. If you attempt to
use multiple titles in one export it will fail.
`Supv`
The supervisor of the project. This appears at the bottom
of every report.
`Client`
The client of the project. This will often appear under the
title on every report.
`Spot`
The date or version number of the spotting report.
Time Range Tags
"""""""""""""""
All of these tags can be set to different values on each clip, but
it often makes sense to use these tags in a :ref:`time range<tag-range>`.
`Sc`
The scene description. This appears on the continuity report
and is used in the Director's logs.
`Ver`
The picture version. This appears beside the spot timecodes
on most reports.
`Reel`
The reel. This appears beside the spot timecodes
on most reports and is used to summarize line totals on the
line count report.
Line tags
"""""""""
`P`
Priority.
`QN`
Cue number. This appears on all reports.
.. warning::
`ptulsconv` will verify that all cue numbers in a given title are unique.
All lines must have a cue number in order to generate reports; if any lines
do not have a cue number set, `ptulsconv` will fail.
`CN`
Character number. This is used to collate character records
and will appear on the line count and in character-collated
reports.
`Char`
Character name. By default, a clip will set this to the
name of the track it appears on, but the track name can be
overridden here.
`Actor`
Actor name.
`Line`
The prompt to appear for this ADR line. By default, this
will be whatever text appears in a clip name prior to the first
tag.
`R`
Reason.
`Mins`
Time budget for this line, in minutes. This is used in the
line count report to give estimated times for each character. This
can be set for the entire project (with a :ref:`marker <tag-marker>`), or for individual
actors (with a tag in the :ref:`track comments<tag-track>`), or can be set for
individual lines to override these.
`Shot`
Shot. A Date or other description indicating the line has been
recorded.
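The layering described for `Mins` can be pictured as successive dictionary merges in which the more specific scope wins. A minimal sketch of the rule, with invented tag values (this illustrates the behavior described above, not ptulsconv's actual implementation):

```python
# Later merges override earlier ones: a project-wide marker value can be
# overridden per-track, and a clip-level tag overrides both.
project_tags = {"Mins": "2", "Title": "Example Picture"}  # from a marker
track_tags = {"Mins": "3", "Actor": "J. Doe"}             # from track comments
clip_tags = {"QN": "J1001"}                               # from the clip name

effective = {**project_tags, **track_tags, **clip_tags}
print(effective["Mins"])  # '3': the track-comment value wins for this clip
```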
Boolean-valued ADR Tag Fields
"""""""""""""""""""""""""""""
`EFF`
Effort. Lines with this tag are subtotaled in the line count report.
`TV`
TV line. Lines with this tag are subtotaled in the line count report.
`TBW`
To be written.
`ADLIB`
Ad-lib.
`OPT`
Optional. Lines with this tag are subtotaled in the line count report.


@@ -0,0 +1,91 @@
Quick Start
===========
The workflow for creating ADR reports with `ptulsconv` is similar to other
ADR spotting programs: spot ADR lines in Pro Tools with clips, using a
special code in the clip names to take notes, export the tracks as text, and
then run the program.
Step 1: Use Pro Tools to Spot ADR Lines
---------------------------------------
`ptulsconv` can be used to spot ADR lines similarly to other programs.
#. Create a new Pro Tools session, name this session after your project.
#. Create new tracks, one for each character. Name each track after a
character.
#. On each track, create a clip group (or edit in some audio) at the time you
would like an ADR line to appear in the report. Name the clip after the
dialogue you are replacing at that time.
Step 2: Add More Information to Your Spots
------------------------------------------
Clips, tracks and markers in your session can contain additional information
to make your ADR reports more complete and useful. You add this information
with *tagging*.
* Every ADR clip must have a unique cue number. After the name of each clip,
add the letters "$QN=" and then a unique number (any combination of letters
or numbers that don't contain a space). You can type these yourself or add
them with batch-renaming when you're done spotting.
* ADR spots should usually have a reason indicated, so you can remember exactly
why you're replacing a particular line. Do this by adding the text "{R="
to your clip names after the prompt and then some short text describing the
reason, and then a closing "}". You can type anything, including spaces.
* If a line is a TV cover line, you can add the text "[TV]" to the end.
So for example, some ADR spot's clip name might look like:
Get to the ladder! {R=Noise} $QN=J1001
Forget your feelings! {R=TV Cover} $QN=J1002 [TV]
These tags can appear in any order.
* You can add the name of an actor to a character's track, so this information
will appear on your reports. In the track name, or in the track comments,
type "{Actor=xxx}" replacing the xxx with the actor's name.
* Characters need to have a number (perhaps from the cast list) to express how
they should be collated. Add "$CN=xxx" with a unique number to each track (or
the track's comments.)
* Set the scene for each line with markers. Create a marker at the beginning of
a scene and make its name "{Sc=xxx}", replacing the xxx with the scene
number and name.
Step 3: Export Tracks from Pro Tools as a Text File
---------------------------------------------------
Export the file as UTF-8 text and be sure to include clips and markers. Export
using the Timecode time format.
Do not export crossfades.
Step 4: Run `ptulsconv` on the Text Export
------------------------------------------
In your Terminal, run the following command:
ptulsconv path/to/your/TEXT_EXPORT.txt
`ptulsconv` will create a folder named "Title_CURRENT_DATE", and within that
folder it will create several PDFs and folders:
- "TITLE ADR Report" 📄 a PDF tabular report of every ADR line you've spotted.
- "TITLE Continuity" 📄 a PDF listing every scene you have indicated and its
timecode.
- "TITLE Line Count" 📄 a PDF tabular report giving line counts by reel, and the
time budget per character and reel (if provided in the tagging).
- "CSV/" 📁 a folder containing CSV documents of all spotted ADR, grouped by
character and reel.
- "Director Logs/" 📁 a folder containing PDF tabular reports, like the overall
report except grouped by character.
- "Supervisor Logs/" 📁 a folder containing PDF reports, one page per line,
designed for note taking during a session, particularly on an iPad.
- "Talent Scripts/" 📁 a folder containing PDF scripts or sides, with the timecode
and prompts for each line, grouped by character but with most other
information suppressed.


@@ -0,0 +1,130 @@
.. _tags:
Tagging
=======
Tags are used to add additional data to a clip in an organized way. The
tagging system in `ptulsconv` is flexible and can be used to add any
kind of extra data to a clip.
Fields in Clip Names
--------------------
Track names, track comments, and clip names can also contain meta-tags, or
"fields," to add additional columns to the output. Thus, if a clip has the
name::
`Fireworks explosion {note=Replace for final} $V=1 [FX] [DESIGN]`
The row output for this clip will contain columns for the values:
+---------------------+-------------------+---+----+--------+
| Clip Name | note | V | FX | DESIGN |
+=====================+===================+===+====+========+
| Fireworks explosion | Replace for final | 1 | FX | DESIGN |
+---------------------+-------------------+---+----+--------+
These fields can be defined in the clip name in three ways:
* `$NAME=VALUE` creates a field named `NAME` with a one-word value `VALUE`.
* `{NAME=VALUE}` creates a field named `NAME` with the value `VALUE`. `VALUE`
in this case may contain spaces or any character up to the closing brace.
* `[NAME]` creates a field named `NAME` with a value `NAME`. This can be used
to create a boolean-valued field; in the output, clips with the field
will have it, and clips without will have the column with an empty value.
For example, if three clips are named::
`"Squad fifty-one, what is your status?" [FUTZ] {Ch=Dispatcher} [ADR]`
`"We are ten-eight at Rampart Hospital." {Ch=Gage} [ADR]`
`(1M) FC callouts rescuing trapped survivors. {Ch=Group} $QN=1001 [GROUP]`
The output will contain the rows:
+----------------------------------------------+------------+------+-----+------+-------+
| Clip Name | Ch | FUTZ | ADR | QN | GROUP |
+==============================================+============+======+=====+======+=======+
| "Squad fifty-one, what is your status?" | Dispatcher | FUTZ | ADR | | |
+----------------------------------------------+------------+------+-----+------+-------+
| "We are ten-eight at Rampart Hospital." | Gage | | ADR | | |
+----------------------------------------------+------------+------+-----+------+-------+
| (1M) FC callouts rescuing trapped survivors. | Group | | | 1001 | GROUP |
+----------------------------------------------+------------+------+-----+------+-------+
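The three field forms above can be sketched with a small regular-expression parser. This is illustrative only: `ptulsconv` itself parses clip names with a PEG grammar (`parsimonious`), so the function and pattern names here are hypothetical.

```python
import re

# Matches the three field forms described above:
#   $NAME=VALUE   one-word value
#   {NAME=VALUE}  value may contain spaces, up to the closing brace
#   [NAME]        boolean-style field whose value is its own name
FIELD_RE = re.compile(
    r"\$(?P<k1>\w+)=(?P<v1>\S+)"
    r"|\{(?P<k2>\w+)=(?P<v2>[^}]*)\}"
    r"|\[(?P<k3>\w+)\]"
)

def parse_clip_name(name: str):
    """Return (clip text with fields stripped, dict of fields)."""
    fields = {}
    for m in FIELD_RE.finditer(name):
        if m.group("k1"):
            fields[m.group("k1")] = m.group("v1")
        elif m.group("k2"):
            fields[m.group("k2")] = m.group("v2")
        else:
            fields[m.group("k3")] = m.group("k3")
    text = FIELD_RE.sub("", name).strip()
    return re.sub(r"\s{2,}", " ", text), fields
```

Running it on the first example clip name yields the clip text `Fireworks explosion` and the field values shown in the table above.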
.. _tag-track:
.. _tag-marker:
Fields in Track Names and Markers
---------------------------------
Fields set in track names, and in track comments, will be applied to *each*
clip on that track. If a track comment contains the text `{Dept=Foley}` for
example, every clip on that track will have a "Foley" value in a "Dept" column.
Likewise, fields set on the session name will apply to all clips in the session.
Fields set in markers, and in marker comments, will be applied to all clips
whose finish is *after* that marker. Fields in markers are applied cumulatively
from the beginning of the session to the end. The latest marker applying to a
clip has precedence: if two markers both define a field, the value in the
later marker prevails.

An important note: fields set on the clip name always have the highest
precedence. If a field is set in a clip name and the same field is set on the
track, the value set on the clip will prevail.
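These precedence rules can be sketched as a dictionary merge. This is an illustration with hypothetical names, not the actual `TagCompiler` implementation; in particular, applying marker fields over track fields is an assumption made for the sketch.

```python
def effective_fields(clip_fields, track_fields, markers, clip_finish):
    """Merge the fields that apply to one clip.

    `markers` is a list of (time, fields) pairs. A marker applies when the
    clip's finish is after the marker; applicable markers are merged in
    time order, so the latest one wins. Clip-name fields are merged last
    and therefore always take precedence.
    """
    merged = dict(track_fields)
    for time, fields in sorted(markers, key=lambda m: m[0]):
        if clip_finish > time:          # marker applies to this clip
            merged.update(fields)
    merged.update(clip_fields)          # clip name has highest precedence
    return merged
```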
.. _tag-range:
Apply Fields to a Time Range of Clips
-------------------------------------
A clip name beginning with "@" will not be included in the output, but its
fields will be applied to clips within its time range on lower tracks.
If track 1 has a clip named `@ {Sc=1- The House}`, any clips beginning within
that range on lower tracks will have a field `Sc` with that value.
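The time-range behavior can be sketched as follows (hypothetical names; the real logic lives in the `TagCompiler` pipeline). Note that a clip's own fields keep precedence over the fields applied by the "@" clip.

```python
def apply_span_fields(span_start, span_finish, span_fields, clips):
    """Apply an "@" clip's fields to every clip that *begins* inside
    its time range; a clip's own fields keep precedence."""
    for clip in clips:                   # clips on lower tracks
        if span_start <= clip["start"] < span_finish:
            for key, value in span_fields.items():
                clip["fields"].setdefault(key, value)
    return clips
```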
Combining Clips with Long Names or Many Tags
--------------------------------------------
A clip name beginning with `&` will have its parsed clip name appended to the
preceding cue, and the fields of following cues will be applied, earlier clips
having precedence. The clips need not be touching, and the clips will be
combined into a single row of the output. The start time of the first clip will
become the start time of the row, and the finish time of the last clip will
become the finish time of the row.
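The combining rule can be sketched like this (illustrative names only, not the package's internal representation):

```python
def combine_continuation(first, continuation):
    """Merge an "&" continuation clip into the preceding cue: names are
    appended, the earlier clip's fields take precedence, and the combined
    row runs from the first clip's start to the last clip's finish."""
    fields = dict(continuation["fields"])
    fields.update(first["fields"])       # earlier clip wins on conflicts
    return {
        "name": first["name"] + " " + continuation["name"],
        "fields": fields,
        "start": first["start"],
        "finish": continuation["finish"],
    }
```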
Setting Document Options
------------------------
.. note::
Document options are not yet implemented.
A clip beginning with `!` sends a command to `ptulsconv`. These commands can
appear anywhere in the document and apply to the entire document. A command
consists of a command word followed by one or more fields.
The following commands are available:
page $SIZE=`(letter|legal|a4)`
Sets the PDF page size for the output.
font {NAME=`name`} {PATH=`path`}
Sets the primary font for the output.
sub `replacement text` {FOR=`text_to_replace`} {IN=`tag`}
Declares a substitution. Wherever `text_to_replace` is encountered in the
document it will be replaced with "replacement text".
If `tag` is set, this substitution will only be applied to the values of
that tag.

View File

@@ -1,18 +0,0 @@
.\" Manpage for ptulsconv
.\" Contact https://github.com/iluvcapra/ptulsconv
.TH ptulsconv 1 "15 May 2020" "0.8.2" "ptulsconv man page"
.SH NAME
.BR "ptulsconv" " \- convert
.IR "Avid Pro Tools" " text exports"
.SH SYNOPSIS
ptulsconv [OPTIONS] Export.txt
.SH DESCRIPTION
Convert a Pro Tools text export into ADR reports.
.SH OPTIONS
.IP "-h, --help"
show a help message and exit.
.TP
.RI "--show-available-tags"
Print a list of tags that are interpreted and exit.
.SH AUTHOR
Jamie Hardt (contact at https://github.com/iluvcapra/ptulsconv)

View File

@@ -1,6 +1,8 @@
from ptulsconv.docparser.ptuls_grammar import protools_text_export_grammar
"""
Parse and convert Pro Tools text exports
"""
__version__ = '1.0.2'
__author__ = 'Jamie Hardt'
__license__ = 'MIT'
__copyright__ = "%s %s (c) 2022 %s. All rights reserved." % (__name__, __version__, __author__)
__version__ = '1.0.6'
# __author__ = 'Jamie Hardt'
# __license__ = 'MIT'
# __copyright__ = "%s %s (c) 2022 %s. All rights reserved." % (__name__, __version__, __author__)

View File

@@ -7,14 +7,6 @@ from ptulsconv.commands import convert
from ptulsconv.reporting import print_status_style, print_banner_style, print_section_header_style, print_fatal_error
# TODO: Support Top-level modes
# Modes we want:
# - "raw" : Output the parsed text export document with no further processing, as json
# - "tagged"? : Output the parsed result of the TagCompiler
# - "doc" : Generate a full panoply of PDF reports contextually based on tagging
def dump_field_map(output=sys.stdout):
from ptulsconv.docparser.tag_mapping import TagMapping
from ptulsconv.docparser.adr_entity import ADRLine, GenericEvent
@@ -23,6 +15,18 @@ def dump_field_map(output=sys.stdout):
TagMapping.print_rules(ADRLine, output=output)
def dump_formats():
print_section_header_style("`raw` format:")
sys.stderr.write("A JSON document of the parsed Pro Tools export.\n")
print_section_header_style("`tagged` Format:")
sys.stderr.write("A JSON document containing one record for each clip, with\n"
"all tags parsed and all tagging rules applied. \n")
print_section_header_style("`doc` format:")
sys.stderr.write("Creates a directory with folders for different types\n"
"of ADR reports.\n\n")
def main():
"""Entry point for the command-line invocation"""
parser = OptionParser()
@@ -50,6 +54,13 @@ def main():
description='Print useful information and exit without processing '
'input files.')
informational_options.add_option('--show-formats',
dest='show_formats',
action='store_true',
default=False,
help='Display helpful information about the '
'available output formats.')
informational_options.add_option('--show-available-tags',
dest='show_tags',
action='store_true',
@@ -71,6 +82,10 @@ def main():
dump_field_map()
sys.exit(0)
elif options.show_formats:
dump_formats()
sys.exit(0)
if len(args) < 2:
print_fatal_error("Error: No input file")
parser.print_help(sys.stderr)

View File

@@ -1,11 +1,19 @@
from fractions import Fraction
import re
"""
Useful functions for parsing and working with timecode.
"""
import math
import re
from collections import namedtuple
from fractions import Fraction
from typing import Optional, SupportsFloat
class TimecodeFormat(namedtuple("_TimecodeFormat", "frame_duration logical_fps drop_frame")):
class TimecodeFormat(namedtuple("_TimecodeFormat", "frame_duration logical_fps drop_frame")):
"""
A struct representing a timecode datum.
"""
def smpte_to_seconds(self, smpte: str) -> Optional[Fraction]:
frame_count = smpte_to_frame_count(smpte, self.logical_fps, drop_frame_hint=self.drop_frame)
if frame_count is None:

View File

@@ -1,3 +1,7 @@
"""
This module provides the main input document parsing and transform
implementation.
"""
import datetime
import os
@@ -26,9 +30,16 @@ from json import JSONEncoder
class MyEncoder(JSONEncoder):
"""
A subclass of :class:`JSONEncoder` which encodes :class:`Fraction` objects
as a dict.
"""
force_denominator: Optional[int]
def default(self, o):
"""
"""
if isinstance(o, Fraction):
return dict(numerator=o.numerator, denominator=o.denominator)
else:
@@ -36,6 +47,11 @@ class MyEncoder(JSONEncoder):
def output_adr_csv(lines: List[ADRLine], time_format: TimecodeFormat):
"""
Writes ADR lines as CSV to the current working directory. Creates directories
for each character number and name pair, and within that directory, creates
a CSV file for each reel.
"""
reels = set([ln.reel for ln in lines])
for n, name in [(n.character_id, n.character_name) for n in lines]:
@@ -72,16 +88,42 @@ def output_adr_csv(lines: List[ADRLine], time_format: TimecodeFormat):
writer.writerow(this_row)
os.chdir("..")
#
# def output_avid_markers(lines):
# reels = set([ln['Reel'] for ln in lines if 'Reel' in ln.keys()])
#
# for reel in reels:
# pass
def generate_documents(session_tc_format, scenes, adr_lines: Iterator[ADRLine], title):
"""
Create PDF output.
"""
print_section_header_style("Creating PDF Reports")
report_date = datetime.datetime.now()
reports_dir = "%s_%s" % (title, report_date.strftime("%Y-%m-%d_%H%M%S"))
os.makedirs(reports_dir, exist_ok=False)
os.chdir(reports_dir)
client = next((x.client for x in adr_lines), "")
supervisor = next((x.supervisor for x in adr_lines), "")
output_continuity(scenes=scenes, tc_display_format=session_tc_format,
title=title, client=client, supervisor=supervisor)
# reels = sorted([r for r in compiler.compile_all_time_spans() if r[0] == 'Reel'],
# key=lambda x: x[2])
reels = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6']
if len(adr_lines) == 0:
print_status_style("No ADR lines were found in the "
"input document. ADR reports will not be generated.")
else:
create_adr_reports(adr_lines, tc_display_format=session_tc_format,
reel_list=sorted(reels))
def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat, reel_list):
def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat, reel_list: List[str]):
"""
Creates a directory hierarchy and a respective set of ADR reports,
given a list of lines.
"""
print_status_style("Creating ADR Report")
output_summary(lines, tc_display_format=tc_display_format)
@@ -106,31 +148,20 @@ def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat,
output_adr_csv(lines, time_format=tc_display_format)
os.chdir("..")
# print_status_style("Creating Avid Marker XML files")
# os.makedirs("Avid Markers", exist_ok=True)
# os.chdir("Avid Markers")
# output_avid_markers(lines)
# os.chdir("..")
print_status_style("Creating Scripts directory and reports")
os.makedirs("Talent Scripts", exist_ok=True)
os.chdir("Talent Scripts")
output_talent_sides(lines, tc_display_format=tc_display_format)
# def parse_text_export(file):
# ast = ptulsconv.protools_text_export_grammar.parse(file.read())
# dict_parser = ptulsconv.DictionaryParserVisitor()
# parsed = dict_parser.visit(ast)
# print_status_style('Session title: %s' % parsed['header']['session_name'])
# print_status_style('Session timecode format: %f' % parsed['header']['timecode_format'])
# print_status_style('Fount %i tracks' % len(parsed['tracks']))
# print_status_style('Found %i markers' % len(parsed['markers']))
# return parsed
def convert(input_file, major_mode, output=sys.stdout, warnings=True):
"""
Primary worker function, accepts the input file and decides
what to do with it based on the `major_mode`.
:param input_file: a path to the input file.
:param major_mode: the selected output mode, 'raw', 'tagged' or 'doc'.
"""
session = parse_document(input_file)
session_tc_format = session.header.timecode_format
@@ -145,41 +176,34 @@ def convert(input_file, major_mode, output=sys.stdout, warnings=True):
if major_mode == 'tagged':
output.write(MyEncoder().encode(compiled_events))
else:
elif major_mode == 'doc':
generic_events, adr_lines = make_entities(compiled_events)
scenes = sorted([s for s in compiler.compile_all_time_spans() if s[0] == 'Sc'],
key=lambda x: x[2])
# TODO: Breakdown by titles
titles = set([x.title for x in (generic_events + adr_lines)])
assert len(titles) == 1, "Multiple titles per export is not supported"
if len(titles) != 1:
print_warning("Multiple titles per export is not supported, "
"found multiple titles: %s Exiting." % titles)
exit(-1)
print(titles)
title = list(titles)[0]
print_status_style("%i generic events found." % len(generic_events))
print_status_style("%i ADR events found." % len(adr_lines))
if warnings:
perform_adr_validations(adr_lines)
if major_mode == 'doc':
print_section_header_style("Creating PDF Reports")
report_date = datetime.datetime.now()
reports_dir = "%s_%s" % (list(titles)[0], report_date.strftime("%Y-%m-%d_%H%M%S"))
os.makedirs(reports_dir, exist_ok=False)
os.chdir(reports_dir)
generate_documents(session_tc_format, scenes, adr_lines, title)
scenes = sorted([s for s in compiler.compile_all_time_spans() if s[0] == 'Sc'],
key=lambda x: x[2])
output_continuity(scenes=scenes, tc_display_format=session_tc_format,
title=list(titles)[0], client="", supervisor="")
# reels = sorted([r for r in compiler.compile_all_time_spans() if r[0] == 'Reel'],
# key=lambda x: x[2])
reels = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6']
create_adr_reports(adr_lines,
tc_display_format=session_tc_format,
reel_list=sorted(reels))
def perform_adr_validations(lines):
def perform_adr_validations(lines : Iterator[ADRLine]):
"""
Performs validations on the input.
"""
for warning in chain(validate_unique_field(lines,
field='cue_number',
scope='title'),
@@ -196,4 +220,3 @@ def perform_adr_validations(lines):
key_field='character_id',
dependent_field='actor_name')):
print_warning(warning.report_message())

View File

@@ -1 +1,5 @@
from .doc_parser_visitor import parse_document
"""
Docparser module
"""
from .pt_doc_parser import parse_document

View File

@@ -1,3 +1,8 @@
"""
This module defines classes and methods for converting :class:`Event` objects into
:class:`ADRLine` objects.
"""
from ptulsconv.docparser.tag_compiler import Event
from typing import Optional, List, Tuple
from dataclasses import dataclass
@@ -7,6 +12,15 @@ from ptulsconv.docparser.tag_mapping import TagMapping
def make_entities(from_events: List[Event]) -> Tuple[List['GenericEvent'], List['ADRLine']]:
"""
Accepts a list of Events and converts them into either ADRLine events or
GenericEvents by calling :func:`make_entity` on each member.
:param from_events: A list of `Event` objects.
:returns: A tuple of two lists, the first containing :class:`GenericEvent` and the
second containing :class:`ADRLine`.
"""
generic_events = list()
adr_lines = list()
@@ -21,6 +35,14 @@ def make_entities(from_events: List[Event]) -> Tuple[List['GenericEvent'], List[
def make_entity(from_event: Event) -> Optional[object]:
"""
Accepts an event and creates either an :class:`ADRLine` or a
:class:`GenericEvent`. An event is an "ADRLine" if it has a cue number/"QN"
tag field.
:param from_event: An :class:`Event`.
"""
instance = GenericEvent
tag_map = GenericEvent.tag_mapping
if 'QN' in from_event.tags.keys():
@@ -67,6 +89,7 @@ class GenericEvent:
@dataclass
class ADRLine(GenericEvent):
priority: Optional[int] = None
cue_number: Optional[str] = None
character_id: Optional[str] = None
@@ -109,30 +132,4 @@ class ADRLine(GenericEvent):
formatter=(lambda x: len(x) > 0))
]
# def __init__(self):
# self.title = None
# self.supervisor = None
# self.client = None
# self.scene = None
# self.version = None
# self.reel = None
# self.start = None
# self.finish = None
# self.priority = None
# self.cue_number = None
# self.character_id = None
# self.character_name = None
# self.actor_name = None
# self.prompt = None
# self.reason = None
# self.requested_by = None
# self.time_budget_mins = None
# self.note = None
# self.spot = None
# self.shot = None
# self.effort = False
# self.tv = False
# self.tbw = False
# self.omitted = False
# self.adlib = False
# self.optional = False

View File

@@ -1,16 +1,90 @@
from parsimonious.nodes import NodeVisitor
from parsimonious.grammar import Grammar
from .doc_entity import SessionDescriptor, HeaderDescriptor, TrackDescriptor, FileDescriptor, \
TrackClipDescriptor, ClipDescriptor, PluginDescriptor, MarkerDescriptor
protools_text_export_grammar = Grammar(
r"""
document = header files_section? clips_section? plugin_listing? track_listing? markers_listing?
header = "SESSION NAME:" fs string_value rs
"SAMPLE RATE:" fs float_value rs
"BIT DEPTH:" fs integer_value "-bit" rs
"SESSION START TIMECODE:" fs string_value rs
"TIMECODE FORMAT:" fs frame_rate " Drop"? " Frame" rs
"# OF AUDIO TRACKS:" fs integer_value rs
"# OF AUDIO CLIPS:" fs integer_value rs
"# OF AUDIO FILES:" fs integer_value rs block_ending
frame_rate = ("60" / "59.94" / "30" / "29.97" / "25" / "24" / "23.976")
files_section = files_header files_column_header file_record* block_ending
files_header = "F I L E S I N S E S S I O N" rs
files_column_header = "Filename" isp fs "Location" rs
file_record = string_value fs string_value rs
clips_section = clips_header clips_column_header clip_record* block_ending
clips_header = "O N L I N E C L I P S I N S E S S I O N" rs
clips_column_header = string_value fs string_value rs
clip_record = string_value fs string_value (fs "[" integer_value "]")? rs
plugin_listing = plugin_header plugin_column_header plugin_record* block_ending
plugin_header = "P L U G - I N S L I S T I N G" rs
plugin_column_header = "MANUFACTURER " fs "PLUG-IN NAME " fs
"VERSION " fs "FORMAT " fs "STEMS " fs
"NUMBER OF INSTANCES" rs
plugin_record = string_value fs string_value fs string_value fs
string_value fs string_value fs string_value rs
track_listing = track_listing_header track_block*
track_block = track_list_top ( track_clip_entry / block_ending )*
track_listing_header = "T R A C K L I S T I N G" rs
track_list_top = "TRACK NAME:" fs string_value rs
"COMMENTS:" fs string_value rs
"USER DELAY:" fs integer_value " Samples" rs
"STATE: " track_state_list rs
("PLUG-INS: " ( fs string_value )* rs)?
"CHANNEL " fs "EVENT " fs "CLIP NAME " fs
"START TIME " fs "END TIME " fs "DURATION " fs
("TIMESTAMP " fs)? "STATE" rs
track_state_list = (track_state " ")*
track_state = "Solo" / "Muted" / "Inactive" / "Hidden"
track_clip_entry = integer_value isp fs
integer_value isp fs
string_value fs
string_value fs string_value fs string_value fs (string_value fs)?
track_clip_state rs
track_clip_state = ("Muted" / "Unmuted")
markers_listing = markers_listing_header markers_column_header marker_record*
markers_listing_header = "M A R K E R S L I S T I N G" rs
markers_column_header = "# " fs "LOCATION " fs "TIME REFERENCE " fs
"UNITS " fs "NAME " fs "COMMENTS" rs
marker_record = integer_value isp fs string_value fs integer_value isp fs
string_value fs string_value fs string_value rs
fs = "\t"
rs = "\n"
block_ending = rs rs
string_value = ~r"[^\t\n]*"
integer_value = ~r"\d+"
float_value = ~r"\d+(\.\d+)?"
isp = ~r"[^\d\t\n]*"
""")
def parse_document(path: str) -> SessionDescriptor:
"""
Parse a Pro Tools text export.
:param path: path to a file
:return: the session descriptor
"""
from .ptuls_grammar import protools_text_export_grammar
with open(path, 'r') as f:
ast = protools_text_export_grammar.parse(f.read())
return DocParserVisitor().visit(ast)

View File

@@ -1,74 +0,0 @@
from parsimonious.grammar import Grammar
protools_text_export_grammar = Grammar(
r"""
document = header files_section? clips_section? plugin_listing? track_listing? markers_listing?
header = "SESSION NAME:" fs string_value rs
"SAMPLE RATE:" fs float_value rs
"BIT DEPTH:" fs integer_value "-bit" rs
"SESSION START TIMECODE:" fs string_value rs
"TIMECODE FORMAT:" fs frame_rate " Drop"? " Frame" rs
"# OF AUDIO TRACKS:" fs integer_value rs
"# OF AUDIO CLIPS:" fs integer_value rs
"# OF AUDIO FILES:" fs integer_value rs block_ending
frame_rate = ("60" / "59.94" / "30" / "29.97" / "25" / "24" / "23.976")
files_section = files_header files_column_header file_record* block_ending
files_header = "F I L E S I N S E S S I O N" rs
files_column_header = "Filename" isp fs "Location" rs
file_record = string_value fs string_value rs
clips_section = clips_header clips_column_header clip_record* block_ending
clips_header = "O N L I N E C L I P S I N S E S S I O N" rs
clips_column_header = string_value fs string_value rs
clip_record = string_value fs string_value (fs "[" integer_value "]")? rs
plugin_listing = plugin_header plugin_column_header plugin_record* block_ending
plugin_header = "P L U G - I N S L I S T I N G" rs
plugin_column_header = "MANUFACTURER " fs "PLUG-IN NAME " fs
"VERSION " fs "FORMAT " fs "STEMS " fs
"NUMBER OF INSTANCES" rs
plugin_record = string_value fs string_value fs string_value fs
string_value fs string_value fs string_value rs
track_listing = track_listing_header track_block*
track_block = track_list_top ( track_clip_entry / block_ending )*
track_listing_header = "T R A C K L I S T I N G" rs
track_list_top = "TRACK NAME:" fs string_value rs
"COMMENTS:" fs string_value rs
"USER DELAY:" fs integer_value " Samples" rs
"STATE: " track_state_list rs
("PLUG-INS: " ( fs string_value )* rs)?
"CHANNEL " fs "EVENT " fs "CLIP NAME " fs
"START TIME " fs "END TIME " fs "DURATION " fs
("TIMESTAMP " fs)? "STATE" rs
track_state_list = (track_state " ")*
track_state = "Solo" / "Muted" / "Inactive" / "Hidden"
track_clip_entry = integer_value isp fs
integer_value isp fs
string_value fs
string_value fs string_value fs string_value fs (string_value fs)?
track_clip_state rs
track_clip_state = ("Muted" / "Unmuted")
markers_listing = markers_listing_header markers_column_header marker_record*
markers_listing_header = "M A R K E R S L I S T I N G" rs
markers_column_header = "# " fs "LOCATION " fs "TIME REFERENCE " fs
"UNITS " fs "NAME " fs "COMMENTS" rs
marker_record = integer_value isp fs string_value fs integer_value isp fs
string_value fs string_value fs string_value rs
fs = "\t"
rs = "\n"
block_ending = rs rs
string_value = ~r"[^\t\n]*"
integer_value = ~r"\d+"
float_value = ~r"\d+(\.\d+)?"
isp = ~r"[^\d\t\n]*"
""")

View File

@@ -19,6 +19,10 @@ class Event:
class TagCompiler:
"""
Uses a `SessionDescriptor` as a data source to produce `Intermediate`
items.
"""
Intermediate = namedtuple('Intermediate', 'track_content track_tags track_comment_tags '
'clip_content clip_tags clip_tag_mode start finish')
@@ -26,6 +30,9 @@ class TagCompiler:
session: doc_entity.SessionDescriptor
def compile_all_time_spans(self) -> List[Tuple[str, str, Fraction, Fraction]]:
"""
:returns: A `List` of (key: str, value: str, start: Fraction, finish: Fraction)
"""
ret_list = list()
for element in self.parse_data():
if element.clip_tag_mode == TagPreModes.TIMESPAN:
@@ -61,10 +68,11 @@ class TagCompiler:
def compile_events(self) -> Iterator[Event]:
step0 = self.parse_data()
step1 = self.apply_appends(step0)
step2 = self.collect_time_spans(step1)
step3 = self.apply_tags(step2)
for datum in step3:
step1 = self.filter_out_directives(step0)
step2 = self.apply_appends(step1)
step3 = self.collect_time_spans(step2)
step4 = self.apply_tags(step3)
for datum in step4:
yield Event(clip_name=datum[0], track_name=datum[1], session_name=datum[2],
tags=datum[3], start=datum[4], finish=datum[5])
@@ -77,6 +85,14 @@ class TagCompiler:
return retval
def filter_out_directives(self, clips : Iterator[Intermediate]) -> Iterator[Intermediate]:
for clip in clips:
if clip.clip_tag_mode == TagPreModes.DIRECTIVE:
continue
else:
yield clip
@staticmethod
def _coalesce_tags(clip_tags: dict, track_tags: dict,
track_comment_tags: dict,

View File

@@ -1,5 +1,5 @@
from parsimonious import NodeVisitor, Grammar
from typing import Dict, Optional
from typing import Dict, Union
from enum import Enum
@@ -7,6 +7,7 @@ class TagPreModes(Enum):
NORMAL = 'Normal'
APPEND = 'Append'
TIMESPAN = 'Timespan'
DIRECTIVE = 'Directive'
tag_grammar = Grammar(
@@ -23,7 +24,7 @@ tag_grammar = Grammar(
tag_junk = word word_sep?
word = ~r"[^ \[\{\$][^ ]*"
word_sep = ~r" +"
modifier = ("@" / "&") word_sep?
modifier = ("@" / "&" /"!") word_sep?
"""
)
@@ -51,7 +52,7 @@ class TagListVisitor(NodeVisitor):
modifier_opt, line_opt, _, tag_list_opt = visited_children
return TaggedStringResult(content=next(iter(line_opt), None),
tag_dict=next(iter(tag_list_opt), None),
tag_dict=next(iter(tag_list_opt), dict()),
mode=TagPreModes(next(iter(modifier_opt), 'Normal'))
)
@@ -65,6 +66,8 @@ class TagListVisitor(NodeVisitor):
return TagPreModes.TIMESPAN
elif node.text.startswith('&'):
return TagPreModes.APPEND
elif node.text.startswith('!'):
return TagPreModes.DIRECTIVE
else:
return TagPreModes.NORMAL

View File

@@ -1,8 +1,20 @@
"""
Methods for converting string representations of film footage.
"""
from fractions import Fraction
import re
from typing import Optional
def footage_to_seconds(footage: str) -> Optional[Fraction]:
"""
Converts a string representation of a footage (35mm, 24fps)
into a :class:`Fraction`, this fraction being some number of
seconds.
:param footage: A string representation of a footage of the form
resembling "90+01".
"""
m = re.match(r'(\d+)\+(\d+)(\.\d+)?', footage)
if m is None:
return None

View File

@@ -36,11 +36,11 @@ def table_for_scene(scene, tc_format, font_name = 'Helvetica'):
def output_report(scenes: List[Tuple[str, str, Fraction, Fraction]],
tc_display_format: TimecodeFormat,
title: str, client: str, supervisor):
title: str, client: str, supervisor, paper_size = letter):
filename = "%s Continuity.pdf" % title
document_header = "Continuity"
doc = make_doc_template(page_size=portrait(letter),
doc = make_doc_template(page_size=portrait(paper_size),
filename=filename,
document_title="Continuity",
title=title,

View File

@@ -1,3 +1,9 @@
"""
Reporting logic. These methods provide reporting methods to the package and
take some pains to provide nice-looking escape codes if we're writing to a
tty.
"""
import sys

View File

@@ -1,3 +1,7 @@
"""
Validation logic for enforcing various consistency rules.
"""
from dataclasses import dataclass
from ptulsconv.docparser.adr_entity import ADRLine
from typing import Iterator, Optional

pyproject.toml Normal file
View File

@@ -0,0 +1,43 @@
[build-system]
requires = ["flit_core >=3.2,<4"]
build-backend = "flit_core.buildapi"
[project]
name = "ptulsconv"
authors = [
{name = "Jamie Hardt", email = "jamiehardt@me.com"},
]
readme = "README.md"
license = { file = "LICENSE" }
classifiers = [
'License :: OSI Approved :: MIT License',
'Topic :: Multimedia',
'Topic :: Multimedia :: Sound/Audio',
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Development Status :: 5 - Production/Stable",
"Topic :: Text Processing :: Filters"
]
requires-python = ">=3.7"
dynamic = ["version", "description"]
keywords = ["text-processing", "parsers", "film",
"broadcast", "editing", "editorial"]
dependencies = ['parsimonious', 'tqdm', 'reportlab']
[project.optional-dependencies]
doc = [
"Sphinx ~= 5.3.0",
"sphinx-rtd-theme >= 1.1.1"
]
[project.scripts]
flit = "ptulsconv.__main__:main"
[project.urls]
Source = 'https://github.com/iluvcapra/ptulsconv'
Issues = 'https://github.com/iluvcapra/ptulsconv/issues'
Documentation = 'https://ptulsconv.readthedocs.io/'

View File

@@ -1,15 +0,0 @@
astroid==2.9.3
isort==5.10.1
lazy-object-proxy==1.7.1
mccabe==0.6.1
parsimonious==0.9.0
Pillow==9.1.1
platformdirs==2.4.1
pylint==2.12.2
regex==2022.6.2
reportlab==3.6.10
six==1.16.0
toml==0.10.2
tqdm==4.64.0
typing_extensions==4.0.1
wrapt==1.13.3

View File

@@ -1,43 +0,0 @@
from setuptools import setup
from ptulsconv import __author__, __license__, __version__
with open("README.md", "r") as fh:
long_description = fh.read()
setup(name='ptulsconv',
version=__version__,
author=__author__,
description='Parse and convert Pro Tools text exports',
long_description_content_type="text/markdown",
long_description=long_description,
license=__license__,
url='https://github.com/iluvcapra/ptulsconv',
project_urls={
'Source':
'https://github.com/iluvcapra/ptulsconv',
'Issues':
'https://github.com/iluvcapra/ptulsconv/issues',
},
classifiers=[
'License :: OSI Approved :: MIT License',
'Topic :: Multimedia',
'Topic :: Multimedia :: Sound/Audio',
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Development Status :: 4 - Beta",
"Topic :: Text Processing :: Filters"],
packages=['ptulsconv'],
keywords='text-processing parsers film tv editing editorial',
install_requires=['parsimonious', 'tqdm', 'reportlab'],
package_data={
"ptulsconv": ["xslt/*.xsl"]
},
entry_points={
'console_scripts': [
'ptulsconv = ptulsconv.__main__:main'
]
}
)

View File

@@ -1,4 +0,0 @@
#!/bin/bash
coverage run -m pytest . ; coverage-lcov

View File

@@ -8,7 +8,7 @@ import glob
from ptulsconv import commands
class TestBroadcastTimecode(unittest.TestCase):
class TestPDFExport(unittest.TestCase):
def test_report_generation(self):
"""
Step through every text file in export_cases and make sure it can

View File

@@ -70,6 +70,16 @@ class TestBroadcastTimecode(unittest.TestCase):
s1 = tc_format.seconds_to_smpte(secs)
self.assertEqual(s1, "00:00:01:01")
def test_unparseable_footage(self):
time_str = "10.1"
s1 = broadcast_timecode.footage_to_frame_count(time_str)
self.assertIsNone(s1)
def test_unparseable_timecode(self):
time_str = "11.32-19"
s1 = broadcast_timecode.smpte_to_frame_count(time_str, frames_per_logical_second=24)
self.assertIsNone(s1)
if __name__ == '__main__':
unittest.main()

View File

@@ -1,5 +1,5 @@
import unittest
from ptulsconv.docparser import doc_entity, doc_parser_visitor, ptuls_grammar, tag_compiler
from ptulsconv.docparser import doc_entity, pt_doc_parser, tag_compiler
import os.path
@@ -8,8 +8,8 @@ class TaggingIntegratedTests(unittest.TestCase):
def test_event_list(self):
with open(self.path, 'r') as f:
document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())
document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
document_ast = pt_doc_parser.protools_text_export_grammar.parse(f.read())
document: doc_entity.SessionDescriptor = pt_doc_parser.DocParserVisitor().visit(document_ast)
compiler = tag_compiler.TagCompiler()
compiler.session = document
@@ -28,8 +28,8 @@ class TaggingIntegratedTests(unittest.TestCase):
def test_append(self):
with open(self.path, 'r') as f:
document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())
document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
document_ast = pt_doc_parser.protools_text_export_grammar.parse(f.read())
document: doc_entity.SessionDescriptor = pt_doc_parser.DocParserVisitor().visit(document_ast)
compiler = tag_compiler.TagCompiler()
compiler.session = document
@@ -51,8 +51,8 @@ class TaggingIntegratedTests(unittest.TestCase):
def test_successive_appends(self):
with open(self.path, 'r') as f:
document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())
document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
document_ast = pt_doc_parser.protools_text_export_grammar.parse(f.read())
document: doc_entity.SessionDescriptor = pt_doc_parser.DocParserVisitor().visit(document_ast)
compiler = tag_compiler.TagCompiler()
compiler.session = document