156 Commits

Author SHA1 Message Date
Jamie Hardt
4a0d19ade1 Added 3.13 to classifiers. 2025-05-25 07:58:55 -07:00
Jamie Hardt
df6c783c51 Added 3.13 to test matrix 2025-05-25 07:57:42 -07:00
Jamie Hardt
f0b232b2b6 autopep 2025-05-25 07:54:50 -07:00
Jamie Hardt
519c6403ba Nudged version 2025-05-25 07:52:11 -07:00
Jamie Hardt
d29c36eafa Fixed some bugs I introduced, fixed entrypoint 2025-05-25 07:50:37 -07:00
Jamie Hardt
2095a1fb75 Nudged version 2025-05-25 07:24:07 -07:00
Jamie Hardt
70defcc46c fixed typo in copyright line 2025-05-25 07:23:25 -07:00
Jamie Hardt
d156b6df89 Added notes 2025-05-25 07:21:50 -07:00
Jamie Hardt
3ba9d7933e Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2025-05-25 07:16:45 -07:00
Jamie Hardt
b0c40ee0b6 Merged 2025-05-25 07:16:35 -07:00
Jamie Hardt
921b0f07af Update pyproject.toml
Fixing sphinx dependencies
2025-05-25 07:13:11 -07:00
Jamie Hardt
57764bc859 Update conf.py 2025-05-25 07:05:24 -07:00
Jamie Hardt
779c93282c Update conf.py
Updated copyright message
2025-05-25 07:04:17 -07:00
Jamie Hardt
9684be6c7e Update __init__.py 2025-05-25 07:03:28 -07:00
Jamie Hardt
484a70fc8e Update __init__.py 2025-05-25 07:01:47 -07:00
Jamie Hardt
5aa005c317 Update conf.py 2025-05-25 07:00:44 -07:00
Jamie Hardt
454adea3d1 Merge pull request #13 from iluvcapra/maint-poetry
Upgrade build tool to Poetry
2025-05-24 22:25:53 -07:00
Jamie Hardt
1e6546dab5 Tweak file for flake 2025-05-24 22:24:45 -07:00
Jamie Hardt
8b262d3bfb Rearranged pyproject, brought in metadata 2025-05-24 22:22:04 -07:00
Jamie Hardt
630e7960dc Making changes for peotry 2025-05-24 22:20:15 -07:00
Jamie Hardt
aa7b418121 Update __init__.py
Nudging version to 2.2.1
2025-05-24 21:58:50 -07:00
Jamie Hardt
a519a525b2 Update pythonpublish.yml
Updating python publish action to the latest version
2025-05-24 21:54:42 -07:00
Jamie Hardt
1412efe509 autopep 2025-05-18 13:39:06 -07:00
Jamie Hardt
12a6c05467 autopep 2025-05-18 13:37:46 -07:00
Jamie Hardt
cf87986014 autopep'd test 2025-05-18 13:35:12 -07:00
Jamie Hardt
67533879f8 Rewrote parsing to handle old & new-style markers 2025-05-18 13:33:51 -07:00
Jamie Hardt
f847b88aa3 Nudged version and copyright date 2025-05-17 12:06:56 -07:00
Jamie Hardt
c3a600c5d7 Integrated track marker test case and fixed parser 2025-05-17 12:05:27 -07:00
Jamie Hardt
914783a809 Updated documentation 2025-05-17 11:26:07 -07:00
Jamie Hardt
c638c673e8 Adding track marker export case 2025-05-17 11:23:54 -07:00
Jamie Hardt
15fe6667af Fixed up unit test 2025-05-17 11:23:02 -07:00
Jamie Hardt
d4e23b59eb Adding support for track markers
(Always ignore for now)
2025-05-17 11:19:22 -07:00
Jamie Hardt
a602b09551 flake8 2025-05-17 10:47:21 -07:00
Jamie Hardt
448d93d717 Fix for flake 2025-05-17 10:45:40 -07:00
Jamie Hardt
59e7d40d97 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2025-05-11 22:19:26 -07:00
Jamie Hardt
eaa5fe824f Fixed parser logic to handle new-style marker tracks 2025-05-11 22:17:42 -07:00
Jamie Hardt
8ebfd32e02 Update __init__.py
Nudge version
2024-07-10 21:16:45 -07:00
Jamie Hardt
83a9adb48a Merge remote-tracking branch 'refs/remotes/origin/master' 2023-11-15 23:09:27 -08:00
Jamie Hardt
013ebcbe75 movie options 2023-11-15 23:09:12 -08:00
Jamie Hardt
c87695e5fe Merge pull request #12 from iluvcapra/maint-py312
Add Python 3.12 support
2023-11-08 10:33:13 -08:00
Jamie Hardt
4a8983cbbb Update python-package.yml
Added 3.12 to test matrix
2023-11-08 10:27:56 -08:00
Jamie Hardt
9123cbd0b5 Update pyproject.toml
Added 3.12 classifier
2023-11-08 10:27:04 -08:00
Jamie Hardt
4224d106b0 Fixed compyright notice 2023-11-04 11:50:03 -07:00
Jamie Hardt
ac22fab97f Some style fixes (all E231) 2023-11-04 11:36:34 -07:00
Jamie Hardt
64ca2c6c5c Silenced some more errors 2023-11-04 11:23:40 -07:00
Jamie Hardt
c3af30dc6a Renamed my JSONEncoder something useful 2023-11-04 11:21:59 -07:00
Jamie Hardt
c30f675cec Cleared up a type warning 2023-11-04 11:17:48 -07:00
Jamie Hardt
204af7d9cb A bunch of typo cleanups and styling. 2023-11-04 11:13:49 -07:00
Jamie Hardt
10fc211e80 Some typos 2023-11-04 10:56:44 -07:00
Jamie Hardt
d56c7df376 Updated documentation to reflect current usage
No longer have to output a text export.
Some formatting changes.
2023-11-04 10:49:56 -07:00
Jamie Hardt
7b38449a5f Fixed formatting of a list. 2023-11-04 10:43:21 -07:00
Jamie Hardt
17b87b6e69 Update __init__.py
Nudged version
2023-07-27 23:23:39 -07:00
Jamie Hardt
a636791539 Autopep 2023-07-27 23:17:23 -07:00
Jamie Hardt
dfde3c4493 Fixed errors with track_index field
In tests
2023-07-27 23:15:49 -07:00
Jamie Hardt
81909c8a51 Added track index to TrackDescriptor
to indicate a track's import order.
2023-07-27 22:58:06 -07:00
Jamie Hardt
e2b9a20870 Added some documentation 2023-07-27 22:10:29 -07:00
Jamie Hardt
006cec05e5 Merge pull request #10 from iluvcapra/bug-flake8
Flake8 code cleanups and a bug fix
2023-07-22 13:01:15 -07:00
Jamie Hardt
a95f0b5cca Nudged version number 2023-07-22 12:58:32 -07:00
Jamie Hardt
70a5206d73 Fixed dumb typo that made ptsl break 2023-07-21 22:21:48 -07:00
Jamie Hardt
128eed002d Update README.md 2023-07-21 14:54:54 -07:00
Jamie Hardt
f8a0d70942 Update README.md
Dumb typo in "last commit" badge
2023-07-21 14:54:23 -07:00
Jamie Hardt
5f29e95ba9 Merge pull request #8 from iluvcapra/bug-flake8
Add Flake8 to build tests, clean up code style
2023-07-21 14:26:53 -07:00
Jamie Hardt
82f07b13a6 Do not warn on unsued imports in __init__ 2023-07-21 14:25:17 -07:00
Jamie Hardt
fbcbba1098 flake8 2023-07-21 14:20:35 -07:00
Jamie Hardt
622f04963f Update python-package.yml
Added flake8 to the build
2023-07-21 14:04:35 -07:00
Jamie Hardt
5b36dcb5aa flake8 2023-07-21 14:03:05 -07:00
Jamie Hardt
fd02d962d0 flake8 2023-07-21 13:45:47 -07:00
Jamie Hardt
2021159666 flake8 fixes 2023-07-21 13:38:24 -07:00
Jamie Hardt
f825b92586 Flake8 cleanups 2023-07-21 13:21:01 -07:00
Jamie Hardt
4318946596 Merge pull request #7 from iluvcapra/require-py-3.8
Eliminate Python 3.7 Support
2023-07-21 12:57:46 -07:00
Jamie Hardt
2a98954885 Update __init__.py 2023-07-21 12:56:13 -07:00
Jamie Hardt
79d8cc5b69 Update python-package.yml 2023-07-21 12:53:43 -07:00
Jamie Hardt
5785dc3364 Update pyproject.toml
Requires Python 3.8
2023-07-21 12:51:12 -07:00
Jamie Hardt
4e64edcd85 Updated tests 2023-07-21 12:44:42 -07:00
Jamie Hardt
58277367c5 Implemeneted direct reading session data with PTSL 2023-07-21 12:33:59 -07:00
Jamie Hardt
617f34a515 Fixing publish script to use pypi 2023-06-02 19:37:12 -07:00
Jamie Hardt
5427b4cfb1 BUmped version number and copyright 2023-06-02 19:25:07 -07:00
Jamie Hardt
408829e820 Fixed numerous errors with build 2023-06-02 19:23:07 -07:00
Jamie Hardt
b65401d25f Fixing doc build 2023-02-28 12:24:30 -08:00
Jamie Hardt
50fe3e2c0a Fixing doc build 2023-02-28 12:21:39 -08:00
Jamie Hardt
1c8feec8fe Added description to module 2023-02-28 10:52:19 -08:00
Jamie Hardt
f510f98ede Bump vers 2023-02-28 10:50:17 -08:00
Jamie Hardt
ddf1948f3c Upgraded to pyproject/flit build style 2023-02-28 10:49:52 -08:00
Jamie Hardt
1c9d373b40 Merge pull request #6 from iluvcapra/dependabot/pip/docs/certifi-2022.12.7
Bump certifi from 2022.9.24 to 2022.12.7 in /docs
2022-12-09 09:07:18 -08:00
dependabot[bot]
51b2517db1 Bump certifi from 2022.9.24 to 2022.12.7 in /docs
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.9.24 to 2022.12.7.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2022.09.24...2022.12.07)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-09 09:23:51 +00:00
Jamie Hardt
27dd8bc94d Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-20 20:24:05 -08:00
Jamie Hardt
dd394a8fec Reading project metadata from project 2022-11-20 20:23:51 -08:00
Jamie Hardt
b5571891cf Update setup.py 2022-11-20 19:08:13 -08:00
Jamie Hardt
73058e9423 Update python-package.yml
Adding Python 3.11 to the build matrix
2022-11-20 19:06:10 -08:00
Jamie Hardt
a11cda40e5 Update pythonpublish.yml 2022-11-20 14:14:26 -08:00
Jamie Hardt
7381a37185 Update pythonpublish.yml
Added hashtags to mastodon message
2022-11-20 14:13:58 -08:00
Jamie Hardt
065bd26f4c Refactored symbol 2022-11-20 13:31:10 -08:00
Jamie Hardt
7ec983f63f Refactored file name 2022-11-20 13:21:15 -08:00
Jamie Hardt
944e66728b Added some tests 2022-11-20 13:14:20 -08:00
Jamie Hardt
6473c83785 .gitignore 2022-11-20 13:03:34 -08:00
Jamie Hardt
8947d409b4 Delete .vim directory 2022-11-20 13:02:26 -08:00
Jamie Hardt
0494e771be Delete .vscode directory 2022-11-20 13:02:18 -08:00
Jamie Hardt
f00bea8702 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-20 12:55:18 -08:00
Jamie Hardt
6e82a14e4f Cleaned up requirements 2022-11-20 12:55:03 -08:00
Jamie Hardt
07669e4eca Update pythonpublish.yml
Added post to Mastodon
2022-11-20 10:53:35 -08:00
Jamie Hardt
ddc406b1eb Update toot.yml 2022-11-20 10:35:29 -08:00
Jamie Hardt
e07b3bb604 Update toot.yml 2022-11-20 10:28:13 -08:00
Jamie Hardt
c02453d10f Create toot.yml 2022-11-20 10:18:45 -08:00
Jamie Hardt
cdc8a838ac Update pythonpublish.yml 2022-11-20 10:12:53 -08:00
Jamie Hardt
e2c7408413 Update pythonpublish.yml 2022-11-20 10:08:52 -08:00
Jamie Hardt
a18154edb0 Update README.md 2022-11-20 08:25:06 -08:00
Jamie Hardt
f15ee40d37 Update README.md 2022-11-20 08:18:53 -08:00
Jamie Hardt
cd26be0c20 unfreezing importlib 2022-11-19 21:42:10 -08:00
Jamie Hardt
d50e45882b Trying to make refs look nice 2022-11-19 21:37:50 -08:00
Jamie Hardt
adb80eb174 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-19 19:04:55 -08:00
Jamie Hardt
2b91f128b9 Refactoring 2022-11-19 19:04:53 -08:00
Jamie Hardt
9f24d45f25 Documentation 2022-11-19 19:02:47 -08:00
Jamie Hardt
3a58fdba75 Some refactoring 2022-11-19 14:47:26 -08:00
Jamie Hardt
800a4dfb12 Adjust warnings 2022-11-19 14:10:30 -08:00
Jamie Hardt
6bc98063db Freeze importlib 2022-11-19 14:04:34 -08:00
Jamie Hardt
b1bf49ca82 Update LICENSE 2022-11-19 00:00:15 -08:00
Jamie Hardt
61250aaf63 Dev docs 2022-11-18 21:26:50 -08:00
Jamie Hardt
43df2c1558 Adding the whole requirements 2022-11-18 20:50:09 -08:00
Jamie Hardt
17dc868756 Hide doc from parent 2022-11-18 20:46:59 -08:00
Jamie Hardt
2e36a789b4 Twiddle docs 2022-11-18 20:39:53 -08:00
Jamie Hardt
1345113a85 Documentation 2022-11-18 20:18:26 -08:00
Jamie Hardt
76c2e24084 Developer documentation 2022-11-18 19:32:00 -08:00
Jamie Hardt
a5ed16849c Documentation 2022-11-18 19:18:08 -08:00
Jamie Hardt
4c3e103e77 Test refinements 2022-11-18 19:09:37 -08:00
Jamie Hardt
dd767b2d41 Merge branches 'master' and 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-18 18:51:48 -08:00
Jamie Hardt
aaf751c1a2 Reorganized docs into folders 2022-11-18 18:51:45 -08:00
Jamie Hardt
91e0da278f Delete .idea directory 2022-11-18 18:47:36 -08:00
Jamie Hardt
a7d01779bd Doc twiddle 2022-11-18 18:44:41 -08:00
Jamie Hardt
cb6c0c8895 Doc tweaks 2022-11-18 18:38:44 -08:00
Jamie Hardt
a2a6782214 Added note 2022-11-18 18:36:35 -08:00
Jamie Hardt
2c78d4a09d Directive implementation 2022-11-18 18:33:51 -08:00
Jamie Hardt
28cf7b5d09 Directive parsing 2022-11-18 16:59:39 -08:00
Jamie Hardt
b419814f82 Doc updates 2022-11-18 16:51:56 -08:00
Jamie Hardt
967ef5c63a Developer docs 2022-11-18 16:26:55 -08:00
Jamie Hardt
fe1a1eebd5 Docs 2022-11-18 16:20:18 -08:00
Jamie Hardt
dadeab49fe New feature doc 2022-11-18 16:14:55 -08:00
Jamie Hardt
900dd5d582 More doc work 2022-11-18 15:37:02 -08:00
Jamie Hardt
5882e01b31 Updated requirements for doc 2022-11-18 15:36:54 -08:00
Jamie Hardt
e2e86faf54 Documentation 2022-11-18 13:03:37 -08:00
Jamie Hardt
bfdefc8da0 Documentation 2022-11-18 12:23:31 -08:00
Jamie Hardt
2af9317e7e Removed refs to CSV
Added more text.
2022-11-18 11:45:58 -08:00
Jamie Hardt
9194e5ba54 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-18 11:34:11 -08:00
Jamie Hardt
528bd949ca Restructuring documenation
Swiching to readthedocs.io
2022-11-18 11:33:47 -08:00
Jamie Hardt
5633eb89f0 Update README.md 2022-11-16 21:05:03 -08:00
Jamie Hardt
29e1753b18 Tweaking this code to silence errors in the github build 2022-11-15 12:28:50 -08:00
Jamie Hardt
1df0b79ab6 Tweaked tag parsing 2022-11-15 12:26:06 -08:00
Jamie Hardt
68db6c9b09 Merge branch 'master' of https://github.com/iluvcapra/ptulsconv 2022-11-15 12:15:45 -08:00
Jamie Hardt
2c664db0dd Updated requirements with latest stuff 2022-11-15 12:14:28 -08:00
Jamie Hardt
e46ac14118 Update python-package.yml 2022-11-15 12:09:58 -08:00
Jamie Hardt
bf3a5c37a8 Added conftest.py to fix pytest 2022-11-15 20:08:30 +00:00
Jamie Hardt
d3b08e9238 Addressed some lint notes 2022-11-15 20:06:11 +00:00
Jamie Hardt
c0d192e651 Delete test-coverage.sh 2022-11-15 11:47:46 -08:00
Jamie Hardt
d3cc9074c4 Update pythonpublish.yml 2022-11-15 11:27:18 -08:00
Jamie Hardt
87108c7865 Update __init__.py
Bump version
2022-11-15 10:28:42 -08:00
Jamie Hardt
04422360f0 Tweaks to quickstart 2022-11-06 14:26:08 -08:00
Jamie Hardt
cd4122ce50 Update README.md 2022-11-06 14:23:52 -08:00
64 changed files with 1967 additions and 1081 deletions

.flake8 Normal file

@@ -0,0 +1,4 @@
+[flake8]
+per-file-ignores =
+    ptulsconv/__init__.py: F401
+    ptulsconv/docparser/__init__.py: F401

.github/workflows/python-package.yml

@@ -16,7 +16,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: [3.7, 3.8, 3.9, "3.10"]
+        python-version: [3.8, 3.9, "3.10", "3.11", "3.12", "3.13"]
     steps:
     - uses: actions/checkout@v2.5.0
@@ -28,7 +28,7 @@ jobs:
       run: |
         python -m pip install --upgrade pip
         python -m pip install flake8 pytest
-        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+        pip install -e .
     - name: Lint with flake8
       run: |
         # stop the build if there are Python syntax errors or undefined names
@@ -37,4 +37,5 @@ jobs:
         flake8 ptulsconv tests --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
     - name: Test with pytest
       run: |
-        PYTHONPATH=. pytest
+        pytest
+        flake8 ptulsconv

.github/workflows/pythonpublish.yml

@@ -2,28 +2,38 @@ name: Upload Python Package
 on:
   release:
-    types: [created]
+    types: [published]
+permissions:
+  contents: read
+  id-token: write
 jobs:
   deploy:
     runs-on: ubuntu-latest
+    environment:
+      name: release
     steps:
-    - uses: actions/checkout@v2.5.0
+    - uses: actions/checkout@v3.5.2
     - name: Set up Python
-      uses: actions/setup-python@v4.3.0
+      uses: actions/setup-python@v4.6.0
       with:
         python-version: '3.x'
     - name: Install dependencies
       run: |
         python -m pip install --upgrade pip
-        pip install setuptools wheel twine
-    - name: Install parsimonious
-      run: |
-        pip install parsimonious
-    - name: Build and publish
-      env:
-        TWINE_USERNAME: __token__
-        TWINE_PASSWORD: ${{ secrets.PYPI_UPLOAD_API_KEY }}
-      run: |
-        python setup.py sdist bdist_wheel
-        twine upload dist/*
+        pip install build
+    - name: Build package
+      run: python -m build
+    - name: pypi-publish
+      uses: pypa/gh-action-pypi-publish@v1.12.4
+    # - name: Report to Mastodon
+    #   uses: cbrgm/mastodon-github-action@v1.0.1
+    #   with:
+    #     message: |
+    #       I just released a new version of ptulsconv, my ADR cue sheet generator!
+    #       #python #protools #pdf #filmmaking
+    #       ${{ github.server_url }}/${{ github.repository }}
+    #   env:
+    #     MASTODON_URL: ${{ secrets.MASTODON_URL }}
+    #     MASTODON_ACCESS_TOKEN: ${{ secrets.MASTODON_ACCESS_TOKEN }}

.github/workflows/toot.yml vendored Normal file

@@ -0,0 +1,22 @@
+name: Test Toot
+on:
+  workflow_dispatch:
+jobs:
+  print-tag:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Report to Mastodon
+        uses: cbrgm/mastodon-github-action@v1.0.1
+        env:
+          MASTODON_URL: ${{ secrets.MASTODON_URL }}
+          MASTODON_ACCESS_TOKEN: ${{ secrets.MASTODON_ACCESS_TOKEN }}
+        with:
+          message: |
+            This is a test toot, automatically posted by a github action.
+            ${{ github.server_url }}/${{ github.repository }}
+            ${{ github.ref }}

.gitignore vendored

@@ -89,6 +89,7 @@ venv/
 ENV/
 env.bak/
 venv.bak/
+venv_docs/

 # Spyder project settings
 .spyderproject
@@ -105,3 +106,6 @@ venv.bak/
 .DS_Store
 /example/Charade/Session File Backups/
 lcov.info
+.vim
+.vscode

.idea/workspace.xml generated

@@ -1,66 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ChangeListManager">
<list default="true" id="68bdb183-5bdf-4b42-962e-28e58c31a89d" name="Default Changelist" comment="">
<change beforePath="$PROJECT_DIR$/.idea/misc.xml" beforeDir="false" afterPath="$PROJECT_DIR$/.idea/misc.xml" afterDir="false" />
<change beforePath="$PROJECT_DIR$/.idea/ptulsconv.iml" beforeDir="false" afterPath="$PROJECT_DIR$/.idea/ptulsconv.iml" afterDir="false" />
</list>
<option name="SHOW_DIALOG" value="false" />
<option name="HIGHLIGHT_CONFLICTS" value="true" />
<option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" />
<option name="LAST_RESOLUTION" value="IGNORE" />
</component>
<component name="Git.Settings">
<option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$" />
</component>
<component name="GitSEFilterConfiguration">
<file-type-list>
<filtered-out-file-type name="LOCAL_BRANCH" />
<filtered-out-file-type name="REMOTE_BRANCH" />
<filtered-out-file-type name="TAG" />
<filtered-out-file-type name="COMMIT_BY_MESSAGE" />
</file-type-list>
</component>
<component name="ProjectId" id="1yyIGrXKNUCYtI4PSaCWGoLG76R" />
<component name="ProjectLevelVcsManager" settingsEditedManually="true" />
<component name="ProjectViewState">
<option name="hideEmptyMiddlePackages" value="true" />
<option name="showLibraryContents" value="true" />
<option name="showMembers" value="true" />
</component>
<component name="PropertiesComponent">
<property name="RunOnceActivity.OpenProjectViewOnStart" value="true" />
<property name="RunOnceActivity.ShowReadmeOnStart" value="true" />
</component>
<component name="SpellCheckerSettings" RuntimeDictionaries="0" Folders="0" CustomDictionaries="0" DefaultDictionary="project-level" UseSingleDictionary="true" transferred="true" />
<component name="TaskManager">
<task active="true" id="Default" summary="Default task">
<changelist id="68bdb183-5bdf-4b42-962e-28e58c31a89d" name="Default Changelist" comment="" />
<created>1633217312285</created>
<option name="number" value="Default" />
<option name="presentableId" value="Default" />
<updated>1633217312285</updated>
</task>
<task id="LOCAL-00001" summary="Reorganized README a little">
<created>1633221191797</created>
<option name="number" value="00001" />
<option name="presentableId" value="LOCAL-00001" />
<option name="project" value="LOCAL" />
<updated>1633221191797</updated>
</task>
<task id="LOCAL-00002" summary="Manpage 0.8.2 bump">
<created>1633221729867</created>
<option name="number" value="00002" />
<option name="presentableId" value="LOCAL-00002" />
<option name="project" value="LOCAL" />
<updated>1633221729867</updated>
</task>
<option name="localTasksCounter" value="3" />
<servers />
</component>
<component name="VcsManagerConfiguration">
<MESSAGE value="Reorganized README a little" />
<MESSAGE value="Manpage 0.8.2 bump" />
<option name="LAST_COMMIT_MESSAGE" value="Manpage 0.8.2 bump" />
</component>
</project>

.readthedocs.yaml Normal file

@@ -0,0 +1,32 @@
+# .readthedocs.yaml
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+# Set the version of Python and other tools you might need
+build:
+  os: ubuntu-20.04
+  tools:
+    python: "3.10"
+  # You can also specify other tool versions:
+  # nodejs: "16"
+  # rust: "1.55"
+  # golang: "1.17"
+
+# Build documentation in the docs/ directory with Sphinx
+sphinx:
+  configuration: docs/source/conf.py
+
+# If using Sphinx, optionally build your docs in additional formats such as PDF
+formats:
+  - pdf
+
+# Optionally declare the Python requirements required to build your docs
+python:
+  install:
+    - method: pip
+      path: .
+      extra_requirements:
+        - doc

.vscode/settings.json

@@ -1,5 +0,0 @@
{
"python.linting.pylintEnabled": true,
"python.linting.enabled": true,
"python.linting.mypyEnabled": false
}

LICENSE

@@ -1,6 +1,6 @@
 MIT License

-Copyright (c) 2019 Jamie Hardt
+Copyright (c) 2022 Jamie Hardt

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md

@@ -1,51 +1,20 @@
+[![Documentation Status](https://readthedocs.org/projects/ptulsconv/badge/?version=latest)](https://ptulsconv.readthedocs.io/en/latest/?badge=latest)
 ![](https://img.shields.io/github/license/iluvcapra/ptulsconv.svg)
 ![](https://img.shields.io/pypi/pyversions/ptulsconv.svg)
 [![](https://img.shields.io/pypi/v/ptulsconv.svg)][pypi]
-![Lint and Test](https://github.com/iluvcapra/ptulsconv/actions/workflows/python-package.yml/badge.svg)
+![GitHub last commit](https://img.shields.io/github/last-commit/iluvcapra/ptulsconv)
+[![Lint and Test](https://github.com/iluvcapra/ptulsconv/actions/workflows/python-package.yml/badge.svg)](https://github.com/iluvcapra/ptulsconv/actions/workflows/python-package.yml)

 [pypi]: https://pypi.org/project/ptulsconv/

 # ptulsconv
-Read Pro Tools text exports and generate PDF reports, JSON output.
+Parse Pro Tools text exports and generate PDF reports, JSON output.

 ## Quick Start
-For a quick overview of how to cue ADR with `ptulsconv`, check out the [Quickstart](doc/QUICKSTART.md).
+For a quick overview of how to cue ADR with `ptulsconv`, check out the [Quickstart][quickstart].
-
-## Theory of Operation
-
-[Avid Pro Tools][avp] can be used to make spotting notes for ADR recording
-sessions by creating spotting regions with descriptive text and exporting the
-session as text. This file can then be dropped into Excel or any CSV-reading
-app like Filemaker Pro.
-
-**ptulsconv** accepts a text export from Pro Tools and automatically creates
-PDF and CSV documents for use in ADR spotting, recording, editing and
-reporting, and supplemental JSON documents can be output for use with other
-workflows.
-
-### Reports Generated by ptulsconv by Default
-
-1. "ADR Report" lists every line in an export with most useful fields, sorted
-   by time.
-2. "Continuity" lists every scene sorted by time.
-3. "Line Count" lists a count of every line, collated by reel number and by
-   effort/TV/optional line designation.
-4. "CSV" is a folder of files of all lines collated by character and reel
-   as CSV files, for use by studio cueing workflows.
-5. "Director Logs" is a folder of PDFs formatted like the "ADR Report" except
-   collated by character.
-6. "Supervisor Logs" creates a PDF report for every character, with one line
-   per page, optimized for note-taking.
-7. "Talent Scripts" is a minimal PDF layout of just timecode and line prompt,
-   collated by character.
-
-[avp]: http://www.avid.com/pro-tools

 ## Installation
@@ -57,3 +26,5 @@ The easiest way to install on your site is to use `pip`:
 This will install the necessary libraries on your host and gives you
 command-line access to the tool through an entry-point `ptulsconv`. In a
 terminal window type `ptulsconv -h` for a list of available options.
+
+[quickstart]: https://ptulsconv.readthedocs.io/en/latest/user/quickstart.html

doc/HOWTO.md

@@ -1,92 +0,0 @@
# How To Use `ptulsconv`
## Theory of Operation
[Avid Pro Tools][avp] exports a tab-delimited text file organized in multiple
parts with an uneven syntax that usually can't "drop in" to other tools like
Excel or Filemaker. `ptulsconv` will accept a text export from Pro Tools and,
by default, create a set of PDF reports useful for ADR reporting.
## Tagging
### Fields in Clip Names
Track names, track comments, and clip names can also contain meta-tags, or
"fields," to add additional columns to the CSV output. Thus, if a clip has the
name:
`Fireworks explosion {note=Replace for final} $V=1 [FX] [DESIGN]`
The row output for this clip will contain columns for the values:
|...| Clip Name| note | V | FX | DESIGN | ...|
|---|------------|------|---|----|--------|----|
|...| Fireworks explosion| Replace for final | 1 | FX | DESIGN | ... |
These fields can be defined in the clip name in three ways:
* `$NAME=VALUE` creates a field named `NAME` with a one-word value `VALUE`.
* `{NAME=VALUE}` creates a field named `NAME` with the value `VALUE`. `VALUE`
in this case may contain spaces or any character up to the closing bracket.
* `[NAME]` creates a field named `NAME` with a value `NAME`. This can be used
to create a boolean-valued field; in the CSV output, clips with the field
will have it, and clips without will have the column with an empty value.
For example, if two clips are named:
`"Squad fifty-one, what is your status?" [FUTZ] {Ch=Dispatcher} [ADR]`
`"We are ten-eight at Rampart Hospital." {Ch=Gage} [ADR]`
The output will contain the range:
|...| PT.Clip.Name| Ch | FUTZ | ADR | ...|
|---|------------|------|---|----|-----|
|...| "Squad fifty-one, what is your status?"| Dispatcher | FUTZ | ADR | ... |
|...| "We are ten-eight at Rampart Hospital."| Gage | | ADR | ... |
### Fields in Track Names and Markers
Fields set in track names, and in track comments, will be applied to *each*
clip on that track. If a track comment contains the text `{Dept=Foley}` for
example, every clip on that track will have a "Foley" value in a "Dept" column.
Likewise, fields set on the session name will apply to all clips in the session.
Fields set in markers, and in marker comments, will be applied to all clips
whose finish is *after* that marker. Fields in markers are applied cumulatively
from the beginning of the session to the end. The latest marker applying to a
clip has precedence, so if one marker comes after another and both define a
field, the value in the later marker prevails.
An important note here is that fields set on the clip name always have the
highest precedence. If a field is set in a clip name and the same field is set
on the track, the value set on the clip will prevail.
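These precedence rules can be sketched as follows (a simplified illustration, not the actual ptulsconv implementation — the function signature and all example values are hypothetical):

```python
def effective_fields(clip_finish, clip_fields, track_fields, markers):
    """Compute the fields a clip ends up with.

    markers is a list of (time, fields) pairs. A marker applies to clips
    whose finish is after the marker time; later markers override earlier
    ones, and clip-name fields always win.
    """
    result = dict(track_fields)
    for time, fields in sorted(markers, key=lambda m: m[0]):
        if clip_finish > time:
            result.update(fields)      # later markers overwrite earlier ones
    result.update(clip_fields)         # clip-name fields have top precedence
    return result
```

For a clip ending at time 100 on a track tagged `{Actor=...}`, a scene marker at time 50 would override one at time 0, while any field repeated in the clip name itself would override both.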
### Using `@` to Apply Fields to a Span of Clips
A clip name beginning with "@" will not be included in the CSV output, but its
fields will be applied to clips within its time range on lower tracks.
If track 1 has a clip named `@ {Sc=1- The House}`, any clips beginning within
that range on lower tracks will have a field `Sc` with that value.
### Using `&` to Combine Clips
A clip name beginning with `&` will have its parsed clip name appended to the
preceding cue, and the fields of following cues will be applied, earlier clips
having precedence. The clips need not be touching, and the clips will be
combined into a single row of the output. The start time of the first clip will
become the start time of the row, and the finish time of the last clip will
become the finish time of the row.
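The `&` combining behavior above can be sketched like so (an illustration only, assuming clips are represented as simple dicts — not the actual ptulsconv data model):

```python
def combine_clips(clips):
    """Merge '&'-prefixed clips into the preceding cue, as described above.

    clips is a time-ordered list of dicts with 'name', 'start', 'finish',
    and 'fields'. Names are appended, the merged row spans the first start
    to the last finish, and earlier clips' fields take precedence.
    """
    rows = []
    for clip in clips:
        if clip["name"].startswith("&") and rows:
            prev = rows[-1]
            prev["name"] += " " + clip["name"][1:].strip()
            prev["finish"] = clip["finish"]
            # earlier clips have precedence: only fill in missing fields
            for key, value in clip["fields"].items():
                prev["fields"].setdefault(key, value)
        else:
            rows.append(dict(clip, fields=dict(clip["fields"])))
    return rows
```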
## What is `ptulsconv` Useful For?
The main purpose of `ptulsconv` is to read a Pro Tools text export and convert
it into PDFs useful for ADR recording.
## Is it useful for anything else?

doc/QUICKSTART.md

@@ -1,86 +0,0 @@
# Quick Start for ADR Spotting and Reporting with `ptulsconv`
## Step 1: Use Pro Tools to spot ADR Lines
`ptulsconv` can be used to spot ADR lines similarly to other programs.
1. Create a new Pro Tools session, name this session after your project.
1. Create new tracks, one for each character. Name each track after a
character.
1. On each track, create a clip group (or edit in some audio) at the time you
would like an ADR line to appear in the report. Name the clip after the
dialogue you are replacing at that time.
## Step 2: Add More Information to Your Spots
Clips, tracks and markers in your session can contain additional information
to make your ADR reports more complete and useful. You add this information
with *tagging*.
- Every ADR clip must have a unique cue number. After the name of each clip,
add the letters "$QN=" and then a unique number (any combination of letters
or numbers that don't contain a space). You can type these yourself or add
them with batch-renaming when you're done spotting.
- ADR spots should usually have a reason indicated, so you can remember exactly
why you're replacing a particular line. Do this by adding the text "{R="
to your clip names after the prompt and then some short text describing the
reason, and then a closing "}". You can type anything, including spaces.
- If a line is a TV cover line, you can add the text "[TV]" to the end.
So for example, some ADR spot's clip name might look like:
"Get to the ladder! {R=Noise} $QN=J1001"
"Forget your feelings! {R=TV Cover} $QN=J1002 [TV]"
These tags can appear in any order.
- You can add the name of an actor to a character's track, so this information
will appear on your reports. In the track name, or in the track comments,
type "{Actor=xxx}" replacing the xxx with the actor's name.
- Characters need to have a number (perhaps from the cast list) to express how
they should be collated. Add "$CN=xxx" with a unique number to each track (or
the track's comments.)
- Set the scene for each line with markers. Create a marker at the beginning of
a scene and make its name "{Sc=xxx}", replacing the xxx with the scene
number and name.
Many tags are available to express different details of each line, character,
or project, like priority, time budget, picture version and reel, notes, etc.
Find them by running `ptulsconv` with the `--show-available-tags`
option.
## Step 3: Export Relevant Tracks from Pro Tools as a Text File
Export the file as a UTF-8 and be sure to include clips and markers. Export
using the Timecode time format.
Do not export crossfades.
## Step 4: Run `ptulsconv` on the Text Export
In your Terminal, run the following command:
ptulsconv path/to/your/TEXT_EXPORT.txt
`ptulsconv` will create a folder named "Title_CURRENT_DATE", and within that
folder it will create several PDFs and folders:
- "TITLE ADR Report" 📄 a PDF tabular report of every ADR line you've spotted.
- "TITLE Continuity" 📄 a PDF listing every scene you have indicated and its
timecode.
- "TITLE Line Count" 📄 a PDF tabular report giving line counts by reel, and the
time budget per character and reel (if provided in the tagging).
- "CSV/" a folder containing CSV documents of all spotted ADR, groupd by
character and reel.
- "Director Logs/" 📁 a folder containing PDF tabular reports, like the overall
report except grouped by character.
- "Supervisor Logs/" 📁 a folder containing PDF reports, one page per line,
designed for note taking during a session, particularly on an iPad.
- "Talent Scripts/" 📁 a folder containing PDF scripts or sides, with the timecode
and prompts for each line, grouped by character but with most other
information suppressed.

docs/Makefile Normal file

@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS    ?=
+SPHINXBUILD   ?= sphinx-build
+SOURCEDIR     = source
+BUILDDIR      = build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/source/conf.py
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
import importlib.metadata
import sys
import os
sys.path.insert(0, os.path.abspath("../.."))
print(sys.path)
import ptulsconv
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = 'ptulsconv'
copyright = '2019-2025 Jamie Hardt. All rights reserved'
version = "Version 2"
release = importlib.metadata.version("ptulsconv")
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'sphinx.ext.githubpages',
]
templates_path = ['_templates']
exclude_patterns = []
master_doc = 'index'
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
latex_documents = [
(master_doc, 'ptulsconv.tex', u'ptulsconv Documentation',
u'Jamie Hardt', 'manual'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True

Contributing
============
Testing
-------
Before submitting PRs or patches, please make sure your branch passes all of the unit tests by running Pytest.

Auxiliary and Helper Modules
============================
Commands Module
---------------
.. automodule:: ptulsconv.commands
:members:
Broadcast Timecode Module
-------------------------
.. automodule:: ptulsconv.broadcast_timecode
:members:
Footage Module
--------------
.. automodule:: ptulsconv.footage
:members:
Reporting Module
----------------
.. automodule:: ptulsconv.reporting
:members:
:undoc-members:
Validations Module
------------------
.. automodule:: ptulsconv.validations
:members:
:undoc-members:

Parsing
=======
Docparser Classes
-----------------
.. autoclass:: ptulsconv.docparser.adr_entity.ADRLine
:members:
:undoc-members:

Theory of Operation
===================
Execution Flow When Producing "doc" Output
------------------------------------------
#. The command line argv is read in :py:func:`ptulsconv.__main__.main()`,
which calls :py:func:`ptulsconv.commands.convert()`
#. :func:`ptulsconv.commands.convert()` reads the input with
:func:`ptulsconv.docparser.doc_parser_visitor()`,
which uses the ``parsimonious`` library to parse the input into an abstract
syntax tree, which the parser visitor uses to convert into a
:class:`ptulsconv.docparser.doc_entity.SessionDescriptor`,
which structures all of the data in the session output.
#. The next action is based on the output format. In the
case of the "doc" output format, it runs some validations
on the input, and calls :func:`ptulsconv.commands.generate_documents()`.
#. :func:`ptulsconv.commands.generate_documents()` creates the output folder, creates the
Continuity report with :func:`ptulsconv.pdf.continuity.output_continuity()` (this document
requires some special-casing), and at the tail calls...
#. :func:`ptulsconv.commands.create_adr_reports()`, which creates folders for
(FIXME finish this)
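The call sequence above can be sketched with stub functions. The names follow the steps in this list, but the bodies and signatures are placeholders for illustration, not ptulsconv's real implementation:

```python
calls = []

def parse_export(text: str):
    # Stands in for ptulsconv.docparser.doc_parser_visitor(), which parses
    # the text export into a SessionDescriptor via parsimonious.
    calls.append("parse")
    return {"session": text}

def create_adr_reports(session):
    # Stands in for ptulsconv.commands.create_adr_reports().
    calls.append("create_adr_reports")

def generate_documents(session):
    # Stands in for ptulsconv.commands.generate_documents(): creates the
    # output folder, the Continuity report, and then the ADR reports.
    calls.append("generate_documents")
    create_adr_reports(session)

def convert(text: str):
    # Stands in for ptulsconv.commands.convert() in the "doc" output mode.
    session = parse_export(text)
    generate_documents(session)

convert("TEXT EXPORT")
```

Running `convert` exercises the stages in the order described above.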

docs/source/index.rst
.. ptulsconv documentation master file, created by
sphinx-quickstart on Fri Nov 18 10:40:33 2022.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to ptulsconv's documentation!
=====================================
`ptulsconv` is a tool for converting Pro Tools text exports into PDF
reports for ADR spotting. It can also be used for converting text
exports into JSON documents for processing by other applications.
.. toctree::
:numbered:
:maxdepth: 2
:caption: User Documentation
user/quickstart
user/tagging
user/for_adr
user/cli_reference
.. toctree::
:numbered:
:maxdepth: 1
:caption: Developer Documentation
dev/contributing
dev/theory
dev/parsing
dev/modules
Indices and tables
==================
* :ref:`modindex`
* :ref:`genindex`
* :ref:`search`

Command-Line Reference
======================
Usage Form
-----------
Invocations of `ptulsconv` take the following form::

    ptulsconv [options] [IN_FILE]
`IN_FILE` is a Pro Tools text export in UTF-8 encoding. If `IN_FILE` is
missing, `ptulsconv` will attempt to connect to Pro Tools and read cue data
from the selected tracks of the currently-open session.
Flags
-----
`-h`, `--help`
Show the help message.
`-f FMT`, `--format=FMT`
Select the output format. By default this is `doc`, which will
generate :ref:`ADR reports<adr-reports>`.
The :ref:`other available options<alt-output-options>`
are `raw` and `tagged`.
Informational Options
"""""""""""""""""""""
These options display information and exit without processing any
input documents.
`--show-formats`
Display information about available output formats.
`--show-available-tags`
Display information about tags that are used by the
report generator.
.. _alt-output-options:
Alternate Output Formats
------------------------
.. _raw-output:
`raw` Output
""""""""""""
The "raw" output format is a JSON document of the parsed input data.
The document is a top-level dictionary with keys for the main sections of the text export: `header`,
`files`, `clips`, `plugins`, `tracks` and `markers`, and the values for these are a list of section
entries, or a dictionary of values, in the case of `header`.
The text values of each record and field in the text export are read and output verbatim; no
further processing is done.
.. _tagged-output:
`tagged` Output
"""""""""""""""
The "tagged" output format is also a JSON document based on the parsed input data, after the additional
step of processing all of the :ref:`tags<tags>` in the document.
The document is a top-level array of dictionaries, one for each recognized ADR spotting clip in the
session. Each dictionary has a `clip_name`, `track_name` and `session_name` key, a `tags` key that
contains a dictionary of every parsed tag (after applying tags from all tracks and markers), and a
`start` and `end` key. The `start` and `end` keys contain the parsed timecode representations of these
values in rational-number form, as a dictionary with `numerator` and `denominator` keys.
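The rational `start`/`end` representation can be sketched in Python with ``Fraction`` and a small JSON encoder. The record values and the encoder class name here are hypothetical; this illustrates the shape of one entry in the `tagged` output array, not the tool's literal output:

```python
import json
from fractions import Fraction
from json import JSONEncoder

class FractionJSONEncoder(JSONEncoder):
    """Encode Fraction values as {"numerator": ..., "denominator": ...}."""
    def default(self, o):
        if isinstance(o, Fraction):
            return {"numerator": o.numerator, "denominator": o.denominator}
        return super().default(o)

# One hypothetical record from the top-level array
record = {
    "clip_name": "Get to the ladder!",
    "track_name": "JAKE",
    "session_name": "My Film ADR",
    "tags": {"QN": "J1001", "R": "Noise"},
    "start": Fraction(3600, 1),    # times are in seconds, as rationals
    "end": Fraction(36037, 10),
}

print(json.dumps(record, cls=FractionJSONEncoder, indent=2))
```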

.. _adr-reports:
`ptulsconv` For ADR Report Generation
=====================================
Reports Created by the ADR Report Generator
-------------------------------------------
(FIXME: write this)
Tags Used by the ADR Report Generator
-------------------------------------
Project-Level Tags
""""""""""""""""""
It usually makes sense to place these either in the session name,
or on a :ref:`marker <tag-marker>` at the beginning of the session, so it will apply to
all of the clips in the session.
`Title`
The title of the project. This will appear at the top
of every report.
.. warning::
`ptulsconv` at this time only supports one title per export. If you attempt to
use multiple titles in one export it will fail.
`Supv`
The supervisor of the project. This appears at the bottom
of every report.
`Client`
The client of the project. This will often appear under the
title on every report.
`Spot`
The date or version number of the spotting report.
Time Range Tags
"""""""""""""""
All of these tags can be set to different values on each clip, but
it often makes sense to use these tags in a :ref:`time range<tag-range>`.
`Sc`
The scene description. This appears on the continuity report
and is used in the Director's logs.
`Ver`
The picture version. This appears beside the spot timecodes
on most reports.
`Reel`
The reel. This appears beside the spot timecodes
on most reports and is used to summarize line totals on the
line count report.
Line tags
"""""""""
`P`
Priority.
`QN`
Cue number. This appears on all reports.
.. warning::
`ptulsconv` will verify that all cue numbers in a given title are unique.
All lines must have a cue number in order to generate reports; if any lines
do not have a cue number set, `ptulsconv` will fail.
`CN`
Character number. This is used to collate character records
and will appear on the line count and in character-collated
reports.
`Char`
Character name. By default, a clip will set this to the
name of the track it appears on, but the track name can be
overridden here.
`Actor`
Actor name.
`Line`
The prompt to appear for this ADR line. By default, this
will be whatever text appears in a clip name prior to the first
tag.
`R`
Reason.
`Mins`
Time budget for this line, in minutes. This is used in the
line count report to give estimated times for each character. This
can be set for the entire project (with a :ref:`marker <tag-marker>`), or for individual
actors (with a tag in the :ref:`track comments<tag-track>`), or can be set for
individual lines to override these.
`Shot`
Shot. A Date or other description indicating the line has been
recorded.
Boolean-valued ADR Tag Fields
"""""""""""""""""""""""""""""
`EFF`
Effort. Lines with this tag are subtotaled in the line count report.
`TV`
TV line. Lines with this tag are subtotaled in the line count report.
`TBW`
To be written.
`ADLIB`
Ad-lib.
`OPT`
Optional. Lines with this tag are subtotaled in the line count report.

Quick Start
===========
The workflow for creating ADR reports in `ptulsconv` is similar to other ADR
spotting programs: spot ADR lines in Pro Tools with clips using a special
code to take notes, export the tracks as text and then run the program.
Step 1: Use Pro Tools to Spot ADR Lines
---------------------------------------
`ptulsconv` can be used to spot ADR lines similarly to other programs.
#. Create a new Pro Tools session, name this session after your project.
#. Create new tracks, one for each character. Name each track after a
character.
#. On each track, create a clip group (or edit in some audio) at the time you
would like an ADR line to appear in the report. Name the clip after the
dialogue you are replacing at that time.
Step 2: Add More Information to Your Spots
------------------------------------------
Clips, tracks and markers in your session can contain additional information
to make your ADR reports more complete and useful. You add this information
with :ref:`tagging<tags>`.
* **Every ADR clip must have a unique cue number.** After the name of each
clip, add the letters ``$QN=`` and then a unique number (any combination of
letters or numbers that don't contain a space). You can type these yourself
or add them with batch-renaming when you're done spotting.
* ADR spots should usually have a reason indicated, so you can remember exactly
why you're replacing a particular line. Do this by adding the text
``{R=`` to your clip names after the prompt and then some short text
describing the reason, and then a closing ``}``. You can type anything,
including spaces.
* If, for example, a line is a TV cover line, you can add the text ``[TV]`` to
the end.
So for example, some ADR spots' clip names might look like::

    Get to the ladder! {R=Noise} $QN=J1001
    "Forget your feelings! {R=TV Cover} $QN=J1002 [TV]
These tags can appear in any order.
* You can add the name of an actor to a character's track, so this information
will appear on your reports. In the track name, or in the track comments,
type ``{Actor=xxx}`` replacing the xxx with the actor's name.
* Characters need to have a number (perhaps from the cast list) to express how
they should be collated. Add ``$CN=xxx`` with
a unique number to each track (or the track's comments.)
* Set the scene for each line with markers. Create a marker at the beginning of
a scene and make its name ``{Sc=xxx}``, replacing the xxx with the scene
number and name.
Step 3: Run `ptulsconv`
------------------------
In Pro Tools, select the tracks that contain your spot clips.
Then, in your Terminal, run the following command::

    ptulsconv
`ptulsconv` will connect to Pro Tools and read all of the clips on the selected
track. It will then create a folder named "Title_CURRENT_DATE", and within that
folder it will create several PDFs and folders:
- "TITLE ADR Report" 📄 a PDF tabular report of every ADR line you've spotted.
- "TITLE Continuity" 📄 a PDF listing every scene you have indicated and its
timecode.
- "TITLE Line Count" 📄 a PDF tabular report giving line counts by reel, and the
time budget per character and reel (if provided in the tagging).
- "CSV/" 📁 a folder containing CSV documents of all spotted ADR, grouped by
character and reel.
- "Director Logs/" 📁 a folder containing PDF tabular reports, like the overall
report except grouped by character.
- "Supervisor Logs/" 📁 a folder containing PDF reports, one page per line,
designed for note taking during a session, particularly on an iPad.
- "Talent Scripts/" 📁 a folder containing PDF scripts or sides, with the timecode
and prompts for each line, grouped by character but with most other
information suppressed.

.. _tags:
Tagging
=======
Tags are used to add additional data to a clip in an organized way. The
tagging system in `ptulsconv` is flexible and can be used to add any kind of
extra data to a clip.
Fields in Clip Names
--------------------
Track names, track comments, and clip names can also contain meta-tags, or
"fields," to add additional columns to the output. Thus, if a clip has the
name::

    Fireworks explosion {note=Replace for final} $V=1 [FX] [DESIGN]
The row output for this clip will contain columns for the values:
+---------------------+-------------------+---+----+--------+
| Clip Name | note | V | FX | DESIGN |
+=====================+===================+===+====+========+
| Fireworks explosion | Replace for final | 1 | FX | DESIGN |
+---------------------+-------------------+---+----+--------+
These fields can be defined in the clip name in three ways:
* ``$NAME=VALUE`` creates a field named ``NAME`` with a one-word value
``VALUE``.
* ``{NAME=VALUE}`` creates a field named ``NAME`` with the value ``VALUE``.
``VALUE`` in this case may contain spaces or any character up to the
closing bracket.
* ``[NAME]`` creates a field named ``NAME`` with a value ``NAME``. This can
be used to create a boolean-valued field; in the output, clips with the
field will have it, and clips without will have the column with an empty
value.
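As an illustrative sketch, the three field syntaxes above could be recognized with regular expressions like these. The function name is hypothetical, and ptulsconv's actual parser uses a ``parsimonious`` grammar rather than regexes:

```python
import re

def parse_fields(clip_name: str) -> dict:
    """Sketch: extract {NAME=VALUE}, $NAME=VALUE and [NAME] fields."""
    fields = {}
    # {NAME=VALUE} -- value may contain spaces, runs to the closing brace
    for name, value in re.findall(r'\{(\w+)=([^}]*)\}', clip_name):
        fields[name] = value
    # $NAME=VALUE -- one-word value (no spaces)
    for name, value in re.findall(r'\$(\w+)=(\S+)', clip_name):
        fields[name] = value
    # [NAME] -- boolean-valued field whose value is its own name
    for name in re.findall(r'\[(\w+)\]', clip_name):
        fields[name] = name
    return fields
```

Applied to the example clip name above, this yields the `note`, `V`, `FX` and `DESIGN` columns shown in the table.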
For example, if three clips are named::

    "Squad fifty-one, what is your status?" [FUTZ] {Ch=Dispatcher} [ADR]
    "We are ten-eight at Rampart Hospital." {Ch=Gage} [ADR]
    (1M) FC callouts rescuing trapped survivors. {Ch=Group} $QN=1001 [GROUP]
The output will contain the range:
+----------------------------------------------+------------+------+-----+------+-------+
| Clip Name | Ch | FUTZ | ADR | QN | GROUP |
+==============================================+============+======+=====+======+=======+
| "Squad fifty-one, what is your status?" | Dispatcher | FUTZ | ADR | | |
+----------------------------------------------+------------+------+-----+------+-------+
| "We are ten-eight at Rampart Hospital." | Gage | | ADR | | |
+----------------------------------------------+------------+------+-----+------+-------+
| (1M) FC callouts rescuing trapped survivors. | Group | | | 1001 | GROUP |
+----------------------------------------------+------------+------+-----+------+-------+
.. _tag-track:
.. _tag-marker:
Fields in Track Names and Markers
---------------------------------
Fields set in track names, and in track comments, will be applied to *each*
clip on that track. If a track comment contains the text ``{Dept=Foley}`` for
example, every clip on that track will have a "Foley" value in a "Dept" column.
Likewise, fields set on the session name will apply to all clips in the session.
Fields set in markers, and in marker comments, will be applied to all clips
whose finish is *after* that marker. Fields in markers are applied cumulatively
from the beginning to the end of the session. The latest marker applying to a
clip has precedence: if one marker comes after another and both define a field,
the value in the later marker prevails.
All markers on all rulers will be scanned for tags. All markers on tracks will
be ignored.
An important note: fields set in the clip name always have the highest
precedence. If a field is set in a clip name and the same field is set on the
track, the value set on the clip will prevail.
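As a sketch, the precedence rules above amount to a series of dictionary merges. The function name, and the assumption that track fields are merged before marker fields, are illustrative only:

```python
def effective_fields(session_fields: dict,
                     track_fields: dict,
                     marker_fields_in_time_order: list,
                     clip_fields: dict) -> dict:
    """Later merges override earlier ones; the clip always wins."""
    result = dict(session_fields)
    result.update(track_fields)
    for marker_fields in marker_fields_in_time_order:
        result.update(marker_fields)   # later markers take precedence
    result.update(clip_fields)         # clip-name fields win over everything
    return result
```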
.. _tag-range:
Apply Fields to a Time Range of Clips
-------------------------------------
A clip name beginning with ``@`` will not be included in the output, but its
fields will be applied to clips within its time range on lower tracks.
If track 1 has a clip named ``@ {Sc=1- The House}``, any clips beginning within
that range on lower tracks will have a field ``Sc`` with that value.
Combining Clips with Long Names or Many Tags
--------------------------------------------
A clip name beginning with ``&`` will have its parsed clip name appended to the
preceding cue, and its fields applied to that cue, with earlier clips taking
precedence. The clips need not be touching, and the clips will be
combined into a single row of the output. The start time of the first clip will
become the start time of the row, and the finish time of the last clip will
become the finish time of the row.
Setting Document Options
------------------------
.. note::
Document options are not yet implemented.
..
A clip beginning with ``!`` sends a command to `ptulsconv`. These commands can
appear anywhere in the document and apply to the entire document. Commands are
given as a list of words.
The following commands are available:
page $SIZE=`(letter|legal|a4)`
Sets the PDF page size for the output.
font {NAME=`name`} {PATH=`path`}
Sets the primary font for the output.
sub `replacement text` {FOR=`text_to_replace`} {IN=`tag`}
Declares a substitution. Wherever text_to_replace is encountered in the
document it will be replaced with "replacement text".
If `tag` is set, this substitution will only be applied to the values of
that tag.

@@ -1,18 +0,0 @@
.\" Manpage for ptulsconv
.\" Contact https://github.com/iluvcapra/ptulsconv
.TH ptulsconv 1 "15 May 2020" "0.8.2" "ptulsconv man page"
.SH NAME
.BR "ptulsconv" " \- convert
.IR "Avid Pro Tools" " text exports"
.SH SYNOPSIS
ptulsconv [OPTIONS] Export.txt
.SH DESCRIPTION
Convert a Pro Tools text export into ADR reports.
.SH OPTIONS
.IP "-h, --help"
show a help message and exit.
.TP
.RI "--show-available-tags"
Print a list of tags that are interpreted and exit.
.SH AUTHOR
Jamie Hardt (contact at https://github.com/iluvcapra/ptulsconv)

@@ -1,6 +1,3 @@
-from ptulsconv.docparser.ptuls_grammar import protools_text_export_grammar
-
-__version__ = '1.0.4'
-__author__ = 'Jamie Hardt'
-__license__ = 'MIT'
-__copyright__ = "%s %s (c) 2022 %s. All rights reserved." % (__name__, __version__, __author__)
+"""
+Parse and convert Pro Tools text exports
+"""

View File

@@ -2,23 +2,17 @@ from optparse import OptionParser, OptionGroup
import datetime import datetime
import sys import sys
from ptulsconv import __name__, __version__, __author__,__copyright__ import ptulsconv
from ptulsconv.commands import convert from ptulsconv.commands import convert
from ptulsconv.reporting import print_status_style, print_banner_style, print_section_header_style, print_fatal_error from ptulsconv.reporting import print_status_style, \
print_banner_style, print_section_header_style, \
print_fatal_error
# TODO: Support Top-level modes
# Modes we want:
# - "raw" : Output the parsed text export document with no further processing, as json
# - "tagged"? : Output the parsed result of the TagCompiler
# - "doc" : Generate a full panoply of PDF reports contextually based on tagging
def dump_field_map(output=sys.stdout): def dump_field_map(output=sys.stdout):
from ptulsconv.docparser.tag_mapping import TagMapping from ptulsconv.docparser.tag_mapping import TagMapping
from ptulsconv.docparser.adr_entity import ADRLine, GenericEvent from ptulsconv.docparser.adr_entity import ADRLine, GenericEvent
TagMapping.print_rules(GenericEvent, output=output) TagMapping.print_rules(GenericEvent, output=output)
TagMapping.print_rules(ADRLine, output=output) TagMapping.print_rules(ADRLine, output=output)
@@ -27,18 +21,18 @@ def dump_formats():
print_section_header_style("`raw` format:") print_section_header_style("`raw` format:")
sys.stderr.write("A JSON document of the parsed Pro Tools export.\n") sys.stderr.write("A JSON document of the parsed Pro Tools export.\n")
print_section_header_style("`tagged` Format:") print_section_header_style("`tagged` Format:")
sys.stderr.write("A JSON document containing one record for each clip, with\n" sys.stderr.write(
"all tags parsed and all tagging rules applied. \n") "A JSON document containing one record for each clip, with\n"
"all tags parsed and all tagging rules applied. \n")
print_section_header_style("`doc` format:") print_section_header_style("`doc` format:")
sys.stderr.write("Creates a directory with folders for different types\n" sys.stderr.write("Creates a directory with folders for different types\n"
"of ADR reports.\n\n") "of ADR reports.\n\n")
def main(): def main():
"""Entry point for the command-line invocation""" """Entry point for the command-line invocation"""
parser = OptionParser() parser = OptionParser()
parser.usage = "ptulsconv [options] TEXT_EXPORT.txt" parser.usage = "ptulsconv [options] [TEXT_EXPORT.txt]"
parser.add_option('-f', '--format', parser.add_option('-f', '--format',
dest='output_format', dest='output_format',
@@ -47,44 +41,54 @@ def main():
default='doc', default='doc',
help='Set output format, `raw`, `tagged`, `doc`.') help='Set output format, `raw`, `tagged`, `doc`.')
parser.add_option('-m', '--movie-opts',
dest='movie_opts',
metavar="MOVIE_OPTS",
help="Set movie options")
warn_options = OptionGroup(title="Warning and Validation Options", warn_options = OptionGroup(title="Warning and Validation Options",
parser=parser) parser=parser)
warn_options.add_option('-W', action='store_false', warn_options.add_option('-W', action='store_false',
dest='warnings', dest='warnings',
default=True, default=True,
help='Suppress warnings for common errors (missing code numbers etc.)') help='Suppress warnings for common '
'errors (missing code numbers etc.)')
parser.add_option_group(warn_options) parser.add_option_group(warn_options)
informational_options = OptionGroup(title="Informational Options", informational_options = OptionGroup(title="Informational Options",
parser=parser, parser=parser,
description='Print useful information and exit without processing ' description='Print useful '
'input files.') 'information '
'and exit without processing '
'input files.')
informational_options.add_option('--show-formats', informational_options.add_option(
dest='show_formats', '--show-formats',
action='store_true', dest='show_formats',
default=False, action='store_true',
help='Display helpful information about the ' default=False,
'available output formats.') help='Display helpful information about the available '
'output formats.')
informational_options.add_option('--show-available-tags', informational_options.add_option(
dest='show_tags', '--show-available-tags',
action='store_true', dest='show_tags',
default=False, action='store_true',
help='Display tag mappings for the FMP XML ' default=False,
'output style and exit.') help='Display tag mappings for the FMP XML output style '
'and exit.')
parser.add_option_group(informational_options) parser.add_option_group(informational_options)
print_banner_style(__copyright__) print_banner_style(ptulsconv.__name__)
(options, args) = parser.parse_args(sys.argv) (options, args) = parser.parse_args(sys.argv)
print_section_header_style("Startup") print_section_header_style("Startup")
print_status_style("This run started %s" % (datetime.datetime.now().isoformat())) print_status_style("This run started %s" %
(datetime.datetime.now().isoformat()))
if options.show_tags: if options.show_tags:
dump_field_map() dump_field_map()
@@ -93,15 +97,19 @@ def main():
elif options.show_formats: elif options.show_formats:
dump_formats() dump_formats()
sys.exit(0) sys.exit(0)
if len(args) < 2:
print_fatal_error("Error: No input file")
parser.print_help(sys.stderr)
sys.exit(22)
try: try:
major_mode = options.output_format major_mode = options.output_format
convert(input_file=args[1], major_mode=major_mode, warnings=options.warnings)
if len(args) < 2:
print_status_style(
"No input file provided, will connect to Pro Tools "
"with PTSL...")
convert(major_mode=major_mode,
warnings=options.warnings)
else:
convert(input_file=args[1],
major_mode=major_mode,
warnings=options.warnings)
except FileNotFoundError as e: except FileNotFoundError as e:
print_fatal_error("Error trying to read input file") print_fatal_error("Error trying to read input file")

@@ -1,13 +1,23 @@
-from fractions import Fraction
-import re
+"""
+Useful functions for parsing and working with timecode.
+"""
 import math
+import re
 from collections import namedtuple
+from fractions import Fraction
 from typing import Optional, SupportsFloat
 
 
-class TimecodeFormat(namedtuple("_TimecodeFormat", "frame_duration logical_fps drop_frame")):
+class TimecodeFormat(namedtuple("_TimecodeFormat",
+                                "frame_duration logical_fps drop_frame")):
+    """
+    A struct representing a timecode datum.
+    """
     def smpte_to_seconds(self, smpte: str) -> Optional[Fraction]:
-        frame_count = smpte_to_frame_count(smpte, self.logical_fps, drop_frame_hint=self.drop_frame)
+        frame_count = smpte_to_frame_count(
+            smpte, self.logical_fps, drop_frame_hint=self.drop_frame)
         if frame_count is None:
             return None
         else:
@@ -15,29 +25,34 @@ class TimecodeFormat(namedtuple("_TimecodeFormat", "frame_duration logical_fps d
     def seconds_to_smpte(self, seconds: SupportsFloat) -> str:
         frame_count = int(seconds / self.frame_duration)
-        return frame_count_to_smpte(frame_count, self.logical_fps, self.drop_frame)
+        return frame_count_to_smpte(frame_count, self.logical_fps,
+                                    self.drop_frame)
 
 
-def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int, drop_frame_hint=False) -> Optional[int]:
+def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int,
+                         drop_frame_hint=False) -> Optional[int]:
     """
     Convert a string with a SMPTE timecode representation into a frame count.
 
     :param smpte_rep_string: The timecode string
-    :param frames_per_logical_second: Num of frames in a logical second. This is asserted to be
-        in one of `[24,25,30,48,50,60]`
-    :param drop_frame_hint: `True` if the timecode rep is drop frame. This is ignored (and implied `True`) if
-        the last separator in the timecode string is a semicolon. This is ignored (and implied `False`) if
-        `frames_per_logical_second` is not 30 or 60.
+    :param frames_per_logical_second: Num of frames in a logical second. This
+        is asserted to be in one of `[24,25,30,48,50,60]`
+    :param drop_frame_hint: `True` if the timecode rep is drop frame. This is
+        ignored (and implied `True`) if the last separator in the timecode
+        string is a semicolon. This is ignored (and implied `False`) if
+        `frames_per_logical_second` is not 30 or 60.
     """
     assert frames_per_logical_second in [24, 25, 30, 48, 50, 60]
-    m = re.search(r'(\d?\d)[:;](\d\d)[:;](\d\d)([:;])(\d\d)(\.\d+)?', smpte_rep_string)
+    m = re.search(
+        r'(\d?\d)[:;](\d\d)[:;](\d\d)([:;])(\d\d)(\.\d+)?', smpte_rep_string)
 
     if m is None:
         return None
 
     hh, mm, ss, sep, ff, frac = m.groups()
-    hh, mm, ss, ff, frac = int(hh), int(mm), int(ss), int(ff), float(frac or 0.0)
+    hh, mm, ss, ff, frac = int(hh), int(
+        mm), int(ss), int(ff), float(frac or 0.0)
 
     drop_frame = drop_frame_hint
     if sep == ";":
@@ -46,8 +61,8 @@ def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int,
     if frames_per_logical_second not in [30, 60]:
         drop_frame = False
 
-    raw_frames = hh * 3600 * frames_per_logical_second + mm * 60 * frames_per_logical_second + \
-        ss * frames_per_logical_second + ff
+    raw_frames = hh * 3600 * frames_per_logical_second + mm * 60 * \
+        frames_per_logical_second + ss * frames_per_logical_second + ff
 
     frames = raw_frames
     if drop_frame is True:
@@ -60,7 +75,8 @@ def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int,
     return frames
 
 
-def frame_count_to_smpte(frame_count: int, frames_per_logical_second: int, drop_frame: bool = False,
+def frame_count_to_smpte(frame_count: int, frames_per_logical_second: int,
+                         drop_frame: bool = False,
                          fractional_frame: Optional[float] = None) -> str:
     assert frames_per_logical_second in [24, 25, 30, 48, 50, 60]
     assert fractional_frame is None or fractional_frame < 1.0
@@ -82,7 +98,8 @@ def frame_count_to_smpte(frame_count: int, frames_per_logical_second: int, drop_
     hh = hh % 24
 
     if fractional_frame is not None and fractional_frame > 0:
-        return "%02i:%02i:%02i%s%02i%s" % (hh, mm, ss, separator, ff, ("%.3f" % fractional_frame)[1:])
+        return "%02i:%02i:%02i%s%02i%s" % (hh, mm, ss, separator, ff,
+                                           ("%.3f" % fractional_frame)[1:])
     else:
         return "%02i:%02i:%02i%s%02i" % (hh, mm, ss, separator, ff)

View File

@@ -1,20 +1,27 @@
+"""
+This module provides the main input document parsing and transform
+implementation.
+"""
 import datetime
 import os
 import sys
 from itertools import chain
 import csv
-from typing import List
+from typing import List, Optional, Iterator
 from fractions import Fraction
-from .docparser.adr_entity import make_entities
-from .reporting import print_section_header_style, print_status_style, print_warning
-from .validations import *
+import ptsl
+from .docparser.adr_entity import make_entities, ADRLine
+from .reporting import print_section_header_style, print_status_style, \
+    print_warning
+from .validations import validate_unique_field, validate_non_empty_field, \
+    validate_dependent_value
 from ptulsconv.docparser import parse_document
 from ptulsconv.docparser.tag_compiler import TagCompiler
 from ptulsconv.broadcast_timecode import TimecodeFormat
-from fractions import Fraction
 from ptulsconv.pdf.supervisor_1pg import output_report as output_supervisor_1pg
 from ptulsconv.pdf.line_count import output_report as output_line_count
@@ -25,10 +32,17 @@ from ptulsconv.pdf.continuity import output_report as output_continuity
 from json import JSONEncoder

-class MyEncoder(JSONEncoder):
+class FractionEncoder(JSONEncoder):
+    """
+    A subclass of :class:`JSONEncoder` which encodes :class:`Fraction` objects
+    as a dict.
+    """
+    force_denominator: Optional[int]

     def default(self, o):
+        """
+        """
         if isinstance(o, Fraction):
             return dict(numerator=o.numerator, denominator=o.denominator)
         else:
@@ -36,6 +50,11 @@ class MyEncoder(JSONEncoder):

 def output_adr_csv(lines: List[ADRLine], time_format: TimecodeFormat):
+    """
+    Writes ADR lines as CSV to the current working directory. Creates
+    directories for each character number and name pair, and within that
+    directory, creates a CSV file for each reel.
+    """
     reels = set([ln.reel for ln in lines])
     for n, name in [(n.character_id, n.character_name) for n in lines]:
@@ -43,12 +62,15 @@ def output_adr_csv(lines: List[ADRLine], time_format: TimecodeFormat):
         os.makedirs(dir_name, exist_ok=True)
         os.chdir(dir_name)
         for reel in reels:
-            these_lines = [ln for ln in lines if ln.character_id == n and ln.reel == reel]
+            these_lines = [ln for ln in lines
+                           if ln.character_id == n and ln.reel == reel]
             if len(these_lines) == 0:
                 continue
-            outfile_name = "%s_%s_%s_%s.csv" % (these_lines[0].title, n, these_lines[0].character_name, reel,)
+            outfile_name = "%s_%s_%s_%s.csv" % (these_lines[0].title, n,
+                                                these_lines[0].character_name,
+                                                reel,)
             with open(outfile_name, mode='w', newline='') as outfile:
                 writer = csv.writer(outfile, dialect='excel')
@@ -62,25 +84,54 @@ def output_adr_csv(lines: List[ADRLine], time_format: TimecodeFormat):
             for event in these_lines:
                 this_start = event.start or 0
                 this_finish = event.finish or 0
-                this_row = [event.title, event.character_name, event.cue_number,
-                            event.reel, event.version,
-                            time_format.seconds_to_smpte(this_start), time_format.seconds_to_smpte(this_finish),
+                this_row = [event.title, event.character_name,
+                            event.cue_number, event.reel, event.version,
+                            time_format.seconds_to_smpte(this_start),
+                            time_format.seconds_to_smpte(this_finish),
                             float(this_start), float(this_finish),
                             event.prompt,
-                            event.reason, event.note, "TV" if event.tv else ""]
+                            event.reason, event.note, "TV"
+                            if event.tv else ""]
                 writer.writerow(this_row)
         os.chdir("..")

-#
-# def output_avid_markers(lines):
-#     reels = set([ln['Reel'] for ln in lines if 'Reel' in ln.keys()])
-#
-#     for reel in reels:
-#         pass
+def generate_documents(session_tc_format, scenes, adr_lines: List[ADRLine],
+                       title):
+    """
+    Create PDF output.
+    """
+    print_section_header_style("Creating PDF Reports")
+    report_date = datetime.datetime.now()
+    reports_dir = "%s_%s" % (title, report_date.strftime("%Y-%m-%d_%H%M%S"))
+    os.makedirs(reports_dir, exist_ok=False)
+    os.chdir(reports_dir)
+    client = next((x.client for x in adr_lines), "")
+    supervisor = next((x.supervisor for x in adr_lines), "")
+    output_continuity(scenes=scenes, tc_display_format=session_tc_format,
+                      title=title, client=client or "",
+                      supervisor=supervisor)
+    reels = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6']
+    if len(adr_lines) == 0:
+        print_status_style("No ADR lines were found in the input document. "
+                           "ADR reports will not be generated.")
+    else:
+        create_adr_reports(adr_lines, tc_display_format=session_tc_format,
+                           reel_list=sorted(reels))

-def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat, reel_list):
+def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat,
+                       reel_list: List[str]):
+    """
+    Creates a directory hierarchy and a respective set of ADR reports,
+    given a list of lines.
+    """
     print_status_style("Creating ADR Report")
     output_summary(lines, tc_display_format=tc_display_format)
@@ -97,7 +148,8 @@ def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat,
     print_status_style("Creating Director's Logs director and reports")
     os.makedirs("Director Logs", exist_ok=True)
     os.chdir("Director Logs")
-    output_summary(lines, tc_display_format=tc_display_format, by_character=True)
+    output_summary(lines, tc_display_format=tc_display_format,
+                   by_character=True)
     os.chdir("..")

     print_status_style("Creating CSV outputs")
@@ -106,36 +158,42 @@ def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat,
     output_adr_csv(lines, time_format=tc_display_format)
     os.chdir("..")

-    # print_status_style("Creating Avid Marker XML files")
-    # os.makedirs("Avid Markers", exist_ok=True)
-    # os.chdir("Avid Markers")
-    # output_avid_markers(lines)
-    # os.chdir("..")
     print_status_style("Creating Scripts directory and reports")
     os.makedirs("Talent Scripts", exist_ok=True)
     os.chdir("Talent Scripts")
     output_talent_sides(lines, tc_display_format=tc_display_format)

-# def parse_text_export(file):
-#     ast = ptulsconv.protools_text_export_grammar.parse(file.read())
-#     dict_parser = ptulsconv.DictionaryParserVisitor()
-#     parsed = dict_parser.visit(ast)
-#     print_status_style('Session title: %s' % parsed['header']['session_name'])
-#     print_status_style('Session timecode format: %f' % parsed['header']['timecode_format'])
-#     print_status_style('Fount %i tracks' % len(parsed['tracks']))
-#     print_status_style('Found %i markers' % len(parsed['markers']))
-#     return parsed

-def convert(input_file, major_mode, output=sys.stdout, warnings=True):
-    session = parse_document(input_file)
+def convert(major_mode, input_file=None, output=sys.stdout, warnings=True):
+    """
+    Primary worker function, accepts the input file and decides
+    what to do with it based on the `major_mode`.
+
+    :param input_file: a path to the input file.
+    :param major_mode: the selected output mode, 'raw', 'tagged' or 'doc'.
+    """
+    session_text = ""
+    if input_file is not None:
+        with open(input_file, "r") as file:
+            session_text = file.read()
+    else:
+        with ptsl.open_engine(
+                company_name="The ptulsconv developers",
+                application_name="ptulsconv") as engine:
+            req = engine.export_session_as_text()
+            req.utf8_encoding()
+            req.include_track_edls()
+            req.include_markers()
+            req.time_type("tc")
+            req.dont_show_crossfades()
+            req.selected_tracks_only()
+            session_text = req.export_string()
+
+    session = parse_document(session_text)
     session_tc_format = session.header.timecode_format

     if major_mode == 'raw':
-        output.write(MyEncoder().encode(session))
+        output.write(FractionEncoder().encode(session))
     else:
         compiler = TagCompiler()
@@ -143,57 +201,55 @@ def convert(input_file, major_mode, output=sys.stdout, warnings=True):
         compiled_events = list(compiler.compile_events())

         if major_mode == 'tagged':
-            output.write(MyEncoder().encode(compiled_events))
-        else:
+            output.write(FractionEncoder().encode(compiled_events))
+        elif major_mode == 'doc':
             generic_events, adr_lines = make_entities(compiled_events)
+            scenes = sorted([s for s in compiler.compile_all_time_spans()
+                             if s[0] == 'Sc'],
+                            key=lambda x: x[2])

             # TODO: Breakdown by titles
             titles = set([x.title for x in (generic_events + adr_lines)])
-            assert len(titles) == 1, "Multiple titles per export is not supported"
+            if len(titles) != 1:
+                print_warning("Multiple titles per export is not supported, "
+                              "found multiple titles: %s Exiting." % titles)
+                exit(-1)

-            print(titles)
+            title = list(titles)[0]
+            print_status_style(
+                "%i generic events found." % len(generic_events)
+            )
+            print_status_style("%i ADR events found." % len(adr_lines))

             if warnings:
-                perform_adr_validations(adr_lines)
+                perform_adr_validations(iter(adr_lines))

-            if major_mode == 'doc':
-                print_section_header_style("Creating PDF Reports")
-                report_date = datetime.datetime.now()
-                reports_dir = "%s_%s" % (list(titles)[0], report_date.strftime("%Y-%m-%d_%H%M%S"))
-                os.makedirs(reports_dir, exist_ok=False)
-                os.chdir(reports_dir)
-                scenes = sorted([s for s in compiler.compile_all_time_spans() if s[0] == 'Sc'],
-                                key=lambda x: x[2])
-                output_continuity(scenes=scenes, tc_display_format=session_tc_format,
-                                  title=list(titles)[0], client="", supervisor="")
-                # reels = sorted([r for r in compiler.compile_all_time_spans() if r[0] == 'Reel'],
-                #                key=lambda x: x[2])
-                reels = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6']
-                create_adr_reports(adr_lines,
-                                   tc_display_format=session_tc_format,
-                                   reel_list=sorted(reels))
+            generate_documents(session_tc_format, scenes, adr_lines,
+                               title)

-def perform_adr_validations(lines):
-    for warning in chain(validate_unique_field(lines,
-                                               field='cue_number',
-                                               scope='title'),
-                         validate_non_empty_field(lines,
-                                                  field='cue_number'),
-                         validate_non_empty_field(lines,
-                                                  field='character_id'),
-                         validate_non_empty_field(lines,
-                                                  field='title'),
-                         validate_dependent_value(lines,
-                                                  key_field='character_id',
-                                                  dependent_field='character_name'),
-                         validate_dependent_value(lines,
-                                                  key_field='character_id',
-                                                  dependent_field='actor_name')):
+def perform_adr_validations(lines: Iterator[ADRLine]):
+    """
+    Performs validations on the input.
+    """
+    for warning in chain(
+            validate_unique_field(lines,
+                                  field='cue_number',
+                                  scope='title'),
+            validate_non_empty_field(lines,
+                                     field='cue_number'),
+            validate_non_empty_field(lines,
+                                     field='character_id'),
+            validate_non_empty_field(lines,
+                                     field='title'),
+            validate_dependent_value(lines,
+                                     key_field='character_id',
+                                     dependent_field='character_name'),
+            validate_dependent_value(lines,
+                                     key_field='character_id',
+                                     dependent_field='actor_name')):
         print_warning(warning.report_message())
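The renamed `FractionEncoder` introduced by this patch can be exercised on its own; below is a minimal, self-contained sketch (the class is re-declared here so the example runs without the package, and it omits the `force_denominator` annotation):

```python
import json
from fractions import Fraction

class FractionEncoder(json.JSONEncoder):
    """Encode Fraction values as a numerator/denominator dict."""
    def default(self, o):
        if isinstance(o, Fraction):
            # Represent the exact rational value instead of failing,
            # since Fraction is not JSON-serializable by default.
            return dict(numerator=o.numerator, denominator=o.denominator)
        return super().default(o)

print(json.dumps({"start": Fraction(1, 4)}, cls=FractionEncoder))
# {"start": {"numerator": 1, "denominator": 4}}
```

This is why the 'raw' and 'tagged' modes can dump session timing exactly: rational times survive the JSON round trip as integer pairs.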

View File

@@ -1 +1,5 @@
-from .doc_parser_visitor import parse_document
+"""
+Docparser module
+"""
+from .pt_doc_parser import parse_document

View File

@@ -1,12 +1,27 @@
+"""
+This module defines classes and methods for converting :class:`Event` objects
+into :class:`ADRLine` objects.
+"""
 from ptulsconv.docparser.tag_compiler import Event
 from typing import Optional, List, Tuple
 from dataclasses import dataclass
 from fractions import Fraction

 from ptulsconv.docparser.tag_mapping import TagMapping

-def make_entities(from_events: List[Event]) -> Tuple[List['GenericEvent'], List['ADRLine']]:
+def make_entities(from_events: List[Event]) -> Tuple[List['GenericEvent'],
+                                                      List['ADRLine']]:
+    """
+    Accepts a list of Events and converts them into either ADRLine events or
+    GenericEvents by calling :func:`make_entity` on each member.
+
+    :param from_events: A list of `Event` objects.
+    :returns: A tuple of two lists, the first containing :class:`GenericEvent`
+        and the second containing :class:`ADRLine`.
+    """
     generic_events = list()
     adr_lines = list()
@@ -21,6 +36,14 @@ def make_entities(from_events: List[Event]) -> Tuple[List['GenericEvent'], List[

 def make_entity(from_event: Event) -> Optional[object]:
+    """
+    Accepts an event and creates either an :class:`ADRLine` or a
+    :class:`GenericEvent`. An event is an "ADRLine" if it has a cue number/"QN"
+    tag field.
+
+    :param from_event: An :class:`Event`.
+    """
     instance = GenericEvent
     tag_map = GenericEvent.tag_mapping
     if 'QN' in from_event.tags.keys():
@@ -45,14 +68,15 @@ class GenericEvent:
     scene: Optional[str] = None
     version: Optional[str] = None
     reel: Optional[str] = None
-    start: Fraction = Fraction(0,1)
-    finish: Fraction = Fraction(0,1)
+    start: Fraction = Fraction(0, 1)
+    finish: Fraction = Fraction(0, 1)
     omitted: bool = False
     note: Optional[str] = None
     requested_by: Optional[str] = None

     tag_mapping = [
-        TagMapping(source='Title', target="title", alt=TagMapping.ContentSource.Session),
+        TagMapping(source='Title', target="title",
+                   alt=TagMapping.ContentSource.Session),
         TagMapping(source="Supv", target="supervisor"),
         TagMapping(source="Client", target="client"),
         TagMapping(source="Sc", target="scene"),
@@ -67,6 +91,7 @@ class GenericEvent:

 @dataclass
 class ADRLine(GenericEvent):
+
     priority: Optional[int] = None
     cue_number: Optional[str] = None
     character_id: Optional[str] = None
@@ -88,9 +113,11 @@ class ADRLine(GenericEvent):
         TagMapping(source="P", target="priority"),
         TagMapping(source="QN", target="cue_number"),
         TagMapping(source="CN", target="character_id"),
-        TagMapping(source="Char", target="character_name", alt=TagMapping.ContentSource.Track),
+        TagMapping(source="Char", target="character_name",
+                   alt=TagMapping.ContentSource.Track),
         TagMapping(source="Actor", target="actor_name"),
-        TagMapping(source="Line", target="prompt", alt=TagMapping.ContentSource.Clip),
+        TagMapping(source="Line", target="prompt",
+                   alt=TagMapping.ContentSource.Clip),
         TagMapping(source="R", target="reason"),
         TagMapping(source="Mins", target="time_budget_mins",
                    formatter=(lambda n: float(n))),
@@ -108,31 +135,3 @@ class ADRLine(GenericEvent):
         TagMapping(source="OPT", target="optional",
                    formatter=(lambda x: len(x) > 0))
     ]
-
-    # def __init__(self):
-    #     self.title = None
-    #     self.supervisor = None
-    #     self.client = None
-    #     self.scene = None
-    #     self.version = None
-    #     self.reel = None
-    #     self.start = None
-    #     self.finish = None
-    #     self.priority = None
-    #     self.cue_number = None
-    #     self.character_id = None
-    #     self.character_name = None
-    #     self.actor_name = None
-    #     self.prompt = None
-    #     self.reason = None
-    #     self.requested_by = None
-    #     self.time_budget_mins = None
-    #     self.note = None
-    #     self.spot = None
-    #     self.shot = None
-    #     self.effort = False
-    #     self.tv = False
-    #     self.tbw = False
-    #     self.omitted = False
-    #     self.adlib = False
-    #     self.optional = False
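The dispatch rule `make_entity` documents (an event with a cue number/"QN" tag becomes an `ADRLine`, anything else a `GenericEvent`) can be sketched with stand-in classes; everything below is illustrative only, the real classes carry many more fields and the tag mappings:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in classes mirroring the inheritance in adr_entity.py.
@dataclass
class GenericEvent:
    note: Optional[str] = None

@dataclass
class ADRLine(GenericEvent):
    cue_number: Optional[str] = None

def make_entity_sketch(tags: dict):
    # An event is an ADR line if and only if it carries a "QN" tag.
    cls = ADRLine if 'QN' in tags else GenericEvent
    return cls()

print(type(make_entity_sketch({'QN': 'A101'})).__name__)  # ADRLine
print(type(make_entity_sketch({'Sc': '12'})).__name__)    # GenericEvent
```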

View File

@@ -19,21 +19,41 @@ class SessionDescriptor:
         self.tracks = kwargs['tracks']
         self.markers = kwargs['markers']

-    def markers_timed(self) -> Iterator[Tuple['MarkerDescriptor', Fraction]]:
+    def markers_timed(self,
+                      only_ruler_markers: bool = True) -> \
+            Iterator[Tuple['MarkerDescriptor', Fraction]]:
+        """
+        Iterate each marker in the session with its respective time reference.
+        """
         for marker in self.markers:
-            marker_time = Fraction(marker.time_reference, int(self.header.sample_rate))
-            #marker_time = self.header.convert_timecode(marker.location)
+            if marker.track_marker and only_ruler_markers:
+                continue
+            marker_time = Fraction(marker.time_reference,
+                                   int(self.header.sample_rate))
+            # marker_time = self.header.convert_timecode(marker.location)
             yield marker, marker_time

-    def tracks_clips(self) -> Iterator[Tuple['TrackDescriptor', 'TrackClipDescriptor']]:
+    def tracks_clips(self) -> Iterator[Tuple['TrackDescriptor',
+                                             'TrackClipDescriptor']]:
+        """
+        Iterate each track clip with its respective owning clip.
+        """
         for track in self.tracks:
             for clip in track.clips:
                 yield track, clip

-    def track_clips_timed(self) -> Iterator[Tuple["TrackDescriptor", "TrackClipDescriptor",
-                                                  Fraction, Fraction, Fraction]]:
+    def track_clips_timed(self) -> Iterator[Tuple["TrackDescriptor",
+                                                  "TrackClipDescriptor",
+                                                  Fraction, Fraction, Fraction]
+                                            ]:
         """
-        :return: A Generator that yields track, clip, start time, finish time, and timestamp
+        Iterate each track clip with its respective owning clip and timing
+        information.
+
+        :returns: A Generator that yields track, clip, start time, finish time,
+            and timestamp
         """
         for track, clip in self.tracks_clips():
             start_time = self.header.convert_timecode(clip.start_timecode)
@@ -105,10 +125,12 @@ class HeaderDescriptor:
         if self.timecode_fps in frame_rates.keys():
             return frame_rates[self.timecode_fps]
         else:
-            raise ValueError("Unrecognized TC rate (%s)" % self.timecode_format)
+            raise ValueError("Unrecognized TC rate (%s)" %
+                             self.timecode_format)

 class TrackDescriptor:
+    index: int
     name: str
     comments: str
     user_delay_samples: int
@@ -117,6 +139,7 @@ class TrackDescriptor:
     clips: List["TrackClipDescriptor"]

     def __init__(self, **kwargs):
+        self.index = kwargs['index']
         self.name = kwargs['name']
         self.comments = kwargs['comments']
         self.user_delay_samples = kwargs['user_delay_samples']
@@ -165,6 +188,7 @@ class MarkerDescriptor:
     units: str
     name: str
     comments: str
+    track_marker: bool

     def __init__(self, **kwargs):
         self.number = kwargs['number']
@@ -173,3 +197,4 @@ class MarkerDescriptor:
         self.units = kwargs['units']
         self.name = kwargs['name']
         self.comments = kwargs['comments']
+        self.track_marker = kwargs['track_marker']
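A note on the sample-accurate time math `markers_timed` relies on: dividing a marker's `time_reference` (in samples) by the session sample rate with `Fraction` keeps the marker time exact, with no floating-point drift. A small sketch with hypothetical values:

```python
from fractions import Fraction

sample_rate = 48000        # taken from the session header
time_reference = 72000     # marker position in samples (hypothetical)

# Exact rational seconds: 72000/48000 reduces to 3/2.
marker_time = Fraction(time_reference, sample_rate)
print(marker_time, float(marker_time))  # 3/2 1.5
```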

View File

@@ -1,172 +0,0 @@
from parsimonious.nodes import NodeVisitor
from .doc_entity import SessionDescriptor, HeaderDescriptor, TrackDescriptor, FileDescriptor, \
TrackClipDescriptor, ClipDescriptor, PluginDescriptor, MarkerDescriptor
def parse_document(path: str) -> SessionDescriptor:
"""
Parse a Pro Tools text export.
:param path: path to a file
:return: the session descriptor
"""
from .ptuls_grammar import protools_text_export_grammar
with open(path, 'r') as f:
ast = protools_text_export_grammar.parse(f.read())
return DocParserVisitor().visit(ast)
class DocParserVisitor(NodeVisitor):
@staticmethod
def visit_document(_, visited_children) -> SessionDescriptor:
files = next(iter(visited_children[1]), None)
clips = next(iter(visited_children[2]), None)
plugins = next(iter(visited_children[3]), None)
tracks = next(iter(visited_children[4]), None)
markers = next(iter(visited_children[5]), None)
return SessionDescriptor(header=visited_children[0],
files=files,
clips=clips,
plugins=plugins,
tracks=tracks,
markers=markers)
@staticmethod
def visit_header(_, visited_children):
tc_drop = False
for _ in visited_children[20]:
tc_drop = True
return HeaderDescriptor(session_name=visited_children[2],
sample_rate=visited_children[6],
bit_depth=visited_children[10],
start_timecode=visited_children[15],
timecode_format=visited_children[19],
timecode_drop_frame=tc_drop,
count_audio_tracks=visited_children[25],
count_clips=visited_children[29],
count_files=visited_children[33])
@staticmethod
def visit_files_section(_, visited_children):
return list(map(lambda child: FileDescriptor(filename=child[0], path=child[2]), visited_children[2]))
@staticmethod
def visit_clips_section(_, visited_children):
channel = next(iter(visited_children[2][3]), 1)
return list(map(lambda child: ClipDescriptor(clip_name=child[0], file=child[2], channel=channel),
visited_children[2]))
@staticmethod
def visit_plugin_listing(_, visited_children):
return list(map(lambda child: PluginDescriptor(manufacturer=child[0],
plugin_name=child[2],
version=child[4],
format=child[6],
stems=child[8],
count_instances=child[10]),
visited_children[2]))
@staticmethod
def visit_track_block(_, visited_children):
track_header, track_clip_list = visited_children
clips = []
for clip in track_clip_list:
if clip[0] is not None:
clips.append(clip[0])
plugins = []
for plugin_opt in track_header[16]:
for plugin in plugin_opt[1]:
plugins.append(plugin[1])
return TrackDescriptor(
name=track_header[2],
comments=track_header[6],
user_delay_samples=track_header[10],
state=track_header[14],
plugins=plugins,
clips=clips
)
@staticmethod
def visit_frame_rate(node, _):
return node.text
@staticmethod
def visit_track_listing(_, visited_children):
return visited_children[1]
@staticmethod
def visit_track_clip_entry(_, visited_children):
timestamp = None
if isinstance(visited_children[14], list):
timestamp = visited_children[14][0][0]
return TrackClipDescriptor(channel=visited_children[0],
event=visited_children[3],
clip_name=visited_children[6],
start_time=visited_children[8],
finish_time=visited_children[10],
duration=visited_children[12],
timestamp=timestamp,
state=visited_children[15])
@staticmethod
def visit_track_state_list(_, visited_children):
states = []
for next_state in visited_children:
states.append(next_state[0][0].text)
return states
@staticmethod
def visit_track_clip_state(node, _):
return node.text
@staticmethod
def visit_markers_listing(_, visited_children):
markers = []
for marker in visited_children[2]:
markers.append(marker)
return markers
@staticmethod
def visit_marker_record(_, visited_children):
return MarkerDescriptor(number=visited_children[0],
location=visited_children[3],
time_reference=visited_children[5],
units=visited_children[8],
name=visited_children[10],
comments=visited_children[12])
@staticmethod
def visit_formatted_clip_name(_, visited_children):
return visited_children[1].text
@staticmethod
def visit_string_value(node, _):
return node.text.strip(" ")
@staticmethod
def visit_integer_value(node, _):
return int(node.text)
# def visit_timecode_value(self, node, visited_children):
# return node.text.strip(" ")
@staticmethod
def visit_float_value(node, _):
return float(node.text)
def visit_block_ending(self, node, visited_children):
pass
def generic_visit(self, node, visited_children):
""" The generic visit method. """
return visited_children or node

View File

@@ -1 +1 @@
-from dataclasses import dataclass
+# from dataclasses import dataclass

View File

@@ -0,0 +1,307 @@
from parsimonious.nodes import NodeVisitor
from parsimonious.grammar import Grammar
from .doc_entity import SessionDescriptor, HeaderDescriptor, TrackDescriptor, \
FileDescriptor, TrackClipDescriptor, ClipDescriptor, PluginDescriptor, \
MarkerDescriptor
protools_text_export_grammar = Grammar(
r"""
document = header files_section? clips_section? plugin_listing?
track_listing? markers_block?
header = "SESSION NAME:" fs string_value rs
"SAMPLE RATE:" fs float_value rs
"BIT DEPTH:" fs integer_value "-bit" rs
"SESSION START TIMECODE:" fs string_value rs
"TIMECODE FORMAT:" fs frame_rate " Drop"? " Frame" rs
"# OF AUDIO TRACKS:" fs integer_value rs
"# OF AUDIO CLIPS:" fs integer_value rs
"# OF AUDIO FILES:" fs integer_value rs block_ending
frame_rate = ("60" / "59.94" / "30" / "29.97" / "25" / "24" /
"23.976")
files_section = files_header files_column_header file_record*
block_ending
files_header = "F I L E S I N S E S S I O N" rs
files_column_header = "Filename" isp fs "Location" rs
file_record = string_value fs string_value rs
clips_section = clips_header clips_column_header clip_record*
block_ending
clips_header = "O N L I N E C L I P S I N S E S S I O N" rs
clips_column_header = string_value fs string_value rs
clip_record = string_value fs string_value
(fs "[" integer_value "]")? rs
plugin_listing = plugin_header plugin_column_header plugin_record*
block_ending
plugin_header = "P L U G - I N S L I S T I N G" rs
plugin_column_header = "MANUFACTURER " fs
"PLUG-IN NAME " fs
"VERSION " fs
"FORMAT " fs
"STEMS " fs
"NUMBER OF INSTANCES" rs
plugin_record = string_value fs string_value fs string_value fs
string_value fs string_value fs string_value rs
track_listing = track_listing_header track_block*
track_block = track_list_top ( track_clip_entry / block_ending )*
track_listing_header = "T R A C K L I S T I N G" rs
track_list_top = "TRACK NAME:" fs string_value rs
"COMMENTS:" fs string_value rs
"USER DELAY:" fs integer_value " Samples" rs
"STATE: " track_state_list rs
("PLUG-INS: " ( fs string_value )* rs)?
"CHANNEL " fs "EVENT " fs
"CLIP NAME " fs
"START TIME " fs "END TIME " fs
"DURATION " fs
("TIMESTAMP " fs)? "STATE" rs
track_state_list = (track_state " ")*
track_state = "Solo" / "Muted" / "Inactive" / "Hidden"
track_clip_entry = integer_value isp fs
integer_value isp fs
string_value fs
string_value fs string_value fs string_value fs
(string_value fs)?
track_clip_state rs
track_clip_state = ("Muted" / "Unmuted")
markers_block = markers_block_header
(markers_list / markers_list_simple)
markers_list_simple = markers_column_header_simple marker_record_simple*
markers_list = markers_column_header marker_record*
markers_block_header = "M A R K E R S L I S T I N G" rs
markers_column_header_simple =
"# LOCATION TIME REFERENCE "
"UNITS NAME "
"COMMENTS" rs
markers_column_header =
"# LOCATION TIME REFERENCE "
"UNITS NAME "
"TRACK NAME "
"TRACK TYPE COMMENTS" rs
marker_record_simple = integer_value isp fs string_value fs
integer_value isp fs string_value fs string_value
fs string_value rs
marker_record = integer_value isp fs string_value fs integer_value isp fs
string_value fs string_value fs string_value fs
string_value fs string_value rs
fs = "\t"
rs = "\n"
block_ending = rs rs
string_value = ~r"[^\t\n]*"
integer_value = ~r"\d+"
float_value = ~r"\d+(\.\d+)?"
isp = ~r"[^\d\t\n]*"
""")
def parse_document(session_text: str) -> SessionDescriptor:
"""
Parse a Pro Tools text export.
:param session_text: Pro Tools session text export
:return: the session descriptor
"""
ast = protools_text_export_grammar.parse(session_text)
return DocParserVisitor().visit(ast)
class DocParserVisitor(NodeVisitor):
def __init__(self):
self.track_index = 0
# @staticmethod
def visit_document(self, _, visited_children) -> SessionDescriptor:
self.track_index = 0
files = next(iter(visited_children[1]), None)
clips = next(iter(visited_children[2]), None)
plugins = next(iter(visited_children[3]), None)
tracks = next(iter(visited_children[4]), None)
markers = next(iter(visited_children[5]), None)
return SessionDescriptor(header=visited_children[0],
files=files,
clips=clips,
plugins=plugins,
tracks=tracks,
markers=markers)
@staticmethod
def visit_header(_, visited_children):
tc_drop = False
for _ in visited_children[20]:
tc_drop = True
return HeaderDescriptor(session_name=visited_children[2],
sample_rate=visited_children[6],
bit_depth=visited_children[10],
start_timecode=visited_children[15],
timecode_format=visited_children[19],
timecode_drop_frame=tc_drop,
count_audio_tracks=visited_children[25],
count_clips=visited_children[29],
count_files=visited_children[33])
@staticmethod
def visit_files_section(_, visited_children):
return list(map(
lambda child: FileDescriptor(filename=child[0], path=child[2]),
visited_children[2]))
@staticmethod
def visit_clips_section(_, visited_children):
channel = next(iter(visited_children[2][3]), 1)
return list(map(
lambda child: ClipDescriptor(clip_name=child[0], file=child[2],
channel=channel),
visited_children[2]))
@staticmethod
def visit_plugin_listing(_, visited_children):
return list(map(lambda child:
PluginDescriptor(manufacturer=child[0],
plugin_name=child[2],
version=child[4],
format=child[6],
stems=child[8],
count_instances=child[10]),
visited_children[2]))
# @staticmethod
def visit_track_block(self, _, visited_children):
track_header, track_clip_list = visited_children
clips = []
for clip in track_clip_list:
if clip[0] is not None:
clips.append(clip[0])
plugins = []
for plugin_opt in track_header[16]:
for plugin in plugin_opt[1]:
plugins.append(plugin[1])
this_index = self.track_index
self.track_index += 1
return TrackDescriptor(
index=this_index,
name=track_header[2],
comments=track_header[6],
            user_delay_samples=track_header[10],
            state=track_header[14],
            plugins=plugins,
            clips=clips
        )

    @staticmethod
    def visit_frame_rate(node, _):
        return node.text

    @staticmethod
    def visit_track_listing(_, visited_children):
        return visited_children[1]

    @staticmethod
    def visit_track_clip_entry(_, visited_children):
        timestamp = None
        if isinstance(visited_children[14], list):
            timestamp = visited_children[14][0][0]
        return TrackClipDescriptor(channel=visited_children[0],
                                   event=visited_children[3],
                                   clip_name=visited_children[6],
                                   start_time=visited_children[8],
                                   finish_time=visited_children[10],
                                   duration=visited_children[12],
                                   timestamp=timestamp,
                                   state=visited_children[15])

    @staticmethod
    def visit_track_state_list(_, visited_children):
        states = []
        for next_state in visited_children:
            states.append(next_state[0][0].text)
        return states

    @staticmethod
    def visit_track_clip_state(node, _):
        return node.text

    @staticmethod
    def visit_markers_block(_, visited_children):
        markers = []
        for marker in visited_children[1][0][1]:
            markers.append(marker)
        return markers

    @staticmethod
    def visit_marker_record_simple(_, visited_children):
        return MarkerDescriptor(number=visited_children[0],
                                location=visited_children[3],
                                time_reference=visited_children[5],
                                units=visited_children[8],
                                name=visited_children[10],
                                comments=visited_children[12],
                                track_marker=False)

    @staticmethod
    def visit_marker_record(_, visited_children):
        track_type = visited_children[15]
        is_track_marker = (track_type == "Track")
        return MarkerDescriptor(number=visited_children[0],
                                location=visited_children[3],
                                time_reference=visited_children[5],
                                units=visited_children[8],
                                name=visited_children[10],
                                comments=visited_children[16],
                                track_marker=is_track_marker)

    @staticmethod
    def visit_formatted_clip_name(_, visited_children):
        return visited_children[1].text

    @staticmethod
    def visit_string_value(node, _):
        return node.text.strip(" ")

    @staticmethod
    def visit_integer_value(node, _):
        return int(node.text)

    # def visit_timecode_value(self, node, visited_children):
    #     return node.text.strip(" ")

    @staticmethod
    def visit_float_value(node, _):
        return float(node.text)

    def visit_block_ending(self, node, visited_children):
        pass

    def generic_visit(self, node, visited_children):
        """ The generic visit method. """
        return visited_children or node


@@ -1,74 +0,0 @@
from parsimonious.grammar import Grammar
protools_text_export_grammar = Grammar(
r"""
document = header files_section? clips_section? plugin_listing? track_listing? markers_listing?
header = "SESSION NAME:" fs string_value rs
"SAMPLE RATE:" fs float_value rs
"BIT DEPTH:" fs integer_value "-bit" rs
"SESSION START TIMECODE:" fs string_value rs
"TIMECODE FORMAT:" fs frame_rate " Drop"? " Frame" rs
"# OF AUDIO TRACKS:" fs integer_value rs
"# OF AUDIO CLIPS:" fs integer_value rs
"# OF AUDIO FILES:" fs integer_value rs block_ending
frame_rate = ("60" / "59.94" / "30" / "29.97" / "25" / "24" / "23.976")
files_section = files_header files_column_header file_record* block_ending
files_header = "F I L E S I N S E S S I O N" rs
files_column_header = "Filename" isp fs "Location" rs
file_record = string_value fs string_value rs
clips_section = clips_header clips_column_header clip_record* block_ending
clips_header = "O N L I N E C L I P S I N S E S S I O N" rs
clips_column_header = string_value fs string_value rs
clip_record = string_value fs string_value (fs "[" integer_value "]")? rs
plugin_listing = plugin_header plugin_column_header plugin_record* block_ending
plugin_header = "P L U G - I N S L I S T I N G" rs
plugin_column_header = "MANUFACTURER " fs "PLUG-IN NAME " fs
"VERSION " fs "FORMAT " fs "STEMS " fs
"NUMBER OF INSTANCES" rs
plugin_record = string_value fs string_value fs string_value fs
string_value fs string_value fs string_value rs
track_listing = track_listing_header track_block*
track_block = track_list_top ( track_clip_entry / block_ending )*
track_listing_header = "T R A C K L I S T I N G" rs
track_list_top = "TRACK NAME:" fs string_value rs
"COMMENTS:" fs string_value rs
"USER DELAY:" fs integer_value " Samples" rs
"STATE: " track_state_list rs
("PLUG-INS: " ( fs string_value )* rs)?
"CHANNEL " fs "EVENT " fs "CLIP NAME " fs
"START TIME " fs "END TIME " fs "DURATION " fs
("TIMESTAMP " fs)? "STATE" rs
track_state_list = (track_state " ")*
track_state = "Solo" / "Muted" / "Inactive" / "Hidden"
track_clip_entry = integer_value isp fs
integer_value isp fs
string_value fs
string_value fs string_value fs string_value fs (string_value fs)?
track_clip_state rs
track_clip_state = ("Muted" / "Unmuted")
markers_listing = markers_listing_header markers_column_header marker_record*
markers_listing_header = "M A R K E R S L I S T I N G" rs
markers_column_header = "# " fs "LOCATION " fs "TIME REFERENCE " fs
"UNITS " fs "NAME " fs "COMMENTS" rs
marker_record = integer_value isp fs string_value fs integer_value isp fs
string_value fs string_value fs string_value rs
fs = "\t"
rs = "\n"
block_ending = rs rs
string_value = ~r"[^\t\n]*"
integer_value = ~r"\d+"
float_value = ~r"\d+(\.\d+)?"
isp = ~r"[^\d\t\n]*"
""")
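The grammar above encodes a simple record shape: fields are tab-separated (`fs = "\t"`) and records are newline-terminated (`rs = "\n"`). A minimal stdlib sketch of that framing, for illustration only (the `parse_header_line` helper is hypothetical, not part of the project; the real parser is the parsimonious grammar above):

```python
# Illustrative only: split one export header line like
# "SESSION NAME:\tMy Session\n" into a (field, value) pair,
# mirroring the fs/rs separators defined in the grammar above.
def parse_header_line(line: str):
    field, _, value = line.rstrip("\n").partition("\t")
    return field.rstrip(":"), value


print(parse_header_line("SESSION NAME:\tMy Session\n"))
# -> ('SESSION NAME', 'My Session')
```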


@@ -19,18 +19,30 @@ class Event:
class TagCompiler:
    """
    Uses a `SessionDescriptor` as a data source to produce `Intermediate`
    items.
    """
    Intermediate = namedtuple('Intermediate',
                              'track_content track_tags track_comment_tags '
                              'clip_content clip_tags clip_tag_mode start '
                              'finish')

    session: doc_entity.SessionDescriptor

    def compile_all_time_spans(self) -> List[Tuple[str, str, Fraction,
                                                   Fraction]]:
        """
        :returns: A `List` of (key: str, value: str, start: Fraction,
            finish: Fraction)
        """
        ret_list = list()
        for element in self.parse_data():
            if element.clip_tag_mode == TagPreModes.TIMESPAN:
                for k in element.clip_tags.keys():
                    ret_list.append((k, element.clip_tags[k], element.start,
                                     element.finish))
        return ret_list
@@ -61,22 +73,36 @@ class TagCompiler:
    def compile_events(self) -> Iterator[Event]:
        step0 = self.parse_data()
        step1 = self.filter_out_directives(step0)
        step2 = self.apply_appends(step1)
        step3 = self.collect_time_spans(step2)
        step4 = self.apply_tags(step3)
        for datum in step4:
            yield Event(clip_name=datum[0], track_name=datum[1],
                        session_name=datum[2], tags=datum[3], start=datum[4],
                        finish=datum[5])

    def _marker_tags(self, at):
        retval = dict()
        applicable = [(m, t) for (m, t) in
                      self.session.markers_timed() if t <= at]
        for marker, _ in sorted(applicable, key=lambda x: x[1]):
            retval.update(parse_tags(marker.comments or "").tag_dict)
            retval.update(parse_tags(marker.name or "").tag_dict)
        return retval

    def filter_out_directives(self,
                              clips: Iterator[Intermediate]) \
            -> Iterator[Intermediate]:
        for clip in clips:
            if clip.clip_tag_mode == 'Directive':
                continue
            else:
                yield clip

    @staticmethod
    def _coalesce_tags(clip_tags: dict, track_tags: dict,
                       track_comment_tags: dict,
@@ -101,29 +127,33 @@ class TagCompiler:
            track_comments_parsed = parse_tags(track.comments)
            clip_parsed = parse_tags(clip.clip_name)
            yield TagCompiler.Intermediate(
                track_content=track_parsed.content,
                track_tags=track_parsed.tag_dict,
                track_comment_tags=track_comments_parsed.tag_dict,
                clip_content=clip_parsed.content,
                clip_tags=clip_parsed.tag_dict,
                clip_tag_mode=clip_parsed.mode,
                start=start, finish=finish)

    @staticmethod
    def apply_appends(parsed: Iterator[Intermediate]) -> \
            Iterator[Intermediate]:
        def should_append(a, b):
            return b.clip_tag_mode == TagPreModes.APPEND and \
                b.start >= a.finish

        def do_append(a, b):
            merged_tags = dict(a.clip_tags)
            merged_tags.update(b.clip_tags)
            return TagCompiler.Intermediate(
                track_content=a.track_content,
                track_tags=a.track_tags,
                track_comment_tags=a.track_comment_tags,
                clip_content=a.clip_content + ' ' + b.clip_content,
                clip_tags=merged_tags, clip_tag_mode=a.clip_tag_mode,
                start=a.start, finish=b.finish)

        yield from apply_appends(parsed, should_append, do_append)
@@ -142,12 +172,14 @@ class TagCompiler:
    @staticmethod
    def _time_span_tags(at_time: Fraction, applicable_spans) -> dict:
        retval = dict()
        for tags in reversed([a[0] for a in applicable_spans
                              if a[1] <= at_time <= a[2]]):
            retval.update(tags)
        return retval

    def apply_tags(self, parsed_with_time_spans) -> \
            Iterator[Tuple[str, str, str, dict, Fraction, Fraction]]:
        session_parsed = parse_tags(self.session.header.session_name)
@@ -155,14 +187,16 @@ class TagCompiler:
            event: 'TagCompiler.Intermediate'
            marker_tags = self._marker_tags(event.start)
            time_span_tags = self._time_span_tags(event.start, time_spans)
            tags = self._coalesce_tags(
                clip_tags=event.clip_tags,
                track_tags=event.track_tags,
                track_comment_tags=event.track_comment_tags,
                timespan_tags=time_span_tags,
                marker_tags=marker_tags,
                session_tags=session_parsed.tag_dict)
            yield (event.clip_content, event.track_content,
                   session_parsed.content, tags, event.start, event.finish)

def apply_appends(source: Iterator,
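The module-level `apply_appends` helper that `TagCompiler.apply_appends` delegates to takes a predicate and a merge function and folds adjacent items together. Its body is cut off in this view, so the sketch below is an assumption about its shape, not the project's actual implementation:

```python
# A sketch of the generic pairwise-merge pattern: hold the previous item,
# merge it with the next while should_append approves, and emit it once no
# further merge applies. Illustrative only; the project's real apply_appends
# is not fully shown above.
from typing import Callable, Iterator, Optional, TypeVar

T = TypeVar("T")


def apply_appends_sketch(source: Iterator[T],
                         should_append: Callable[[T, T], bool],
                         do_append: Callable[[T, T], T]) -> Iterator[T]:
    held: Optional[T] = None
    for item in source:
        if held is None:
            held = item
        elif should_append(held, item):
            held = do_append(held, item)
        else:
            yield held
            held = item
    if held is not None:
        yield held


# Toy usage: strings starting with "&" append onto their predecessor,
# echoing the "&" Append tag modifier elsewhere in this changeset.
merged = list(apply_appends_sketch(iter(["foo", "&bar", "baz"]),
                                   lambda a, b: b.startswith("&"),
                                   lambda a, b: a + b[1:]))
print(merged)  # -> ['foobar', 'baz']
```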


@@ -48,7 +48,8 @@ class TagMapping:
        for rule in rules:
            if rule.target in done:
                continue
            if rule.apply(tags, clip_content, track_content, session_content,
                          to):
                done.update(rule.target)

    def __init__(self, source: str,


@@ -1,5 +1,5 @@
from parsimonious import NodeVisitor, Grammar
from typing import Dict
from enum import Enum
@@ -7,6 +7,7 @@ class TagPreModes(Enum):
    NORMAL = 'Normal'
    APPEND = 'Append'
    TIMESPAN = 'Timespan'
    DIRECTIVE = 'Directive'

tag_grammar = Grammar(
@@ -23,7 +24,7 @@ tag_grammar = Grammar(
    tag_junk = word word_sep?
    word = ~r"[^ \[\{\$][^ ]*"
    word_sep = ~r" +"
    modifier = ("@" / "&" / "!") word_sep?
    """
)
@@ -51,8 +52,9 @@ class TagListVisitor(NodeVisitor):
        modifier_opt, line_opt, _, tag_list_opt = visited_children
        return TaggedStringResult(content=next(iter(line_opt), None),
                                  tag_dict=next(iter(tag_list_opt), dict()),
                                  mode=TagPreModes(
                                      next(iter(modifier_opt), 'Normal'))
                                  )

    @staticmethod
@@ -65,6 +67,8 @@ class TagListVisitor(NodeVisitor):
            return TagPreModes.TIMESPAN
        elif node.text.startswith('&'):
            return TagPreModes.APPEND
        elif node.text.startswith('!'):
            return TagPreModes.DIRECTIVE
        else:
            return TagPreModes.NORMAL
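The new `!` modifier rides on the usual `Enum` value-lookup pattern: the visitor maps the leading character to a member, and `TagPreModes('Directive')` resolves the string value back to `DIRECTIVE`. A self-contained sketch (the enum is redefined locally here so the example runs on its own):

```python
# Mirrors the startswith() chain in TagListVisitor above; the enum is a
# local copy of TagPreModes for illustration.
from enum import Enum


class TagPreModes(Enum):
    NORMAL = 'Normal'
    APPEND = 'Append'
    TIMESPAN = 'Timespan'
    DIRECTIVE = 'Directive'


def mode_for_modifier(text: str) -> TagPreModes:
    if text.startswith('@'):
        return TagPreModes.TIMESPAN
    elif text.startswith('&'):
        return TagPreModes.APPEND
    elif text.startswith('!'):
        return TagPreModes.DIRECTIVE
    return TagPreModes.NORMAL


print(mode_for_modifier('!').name)        # -> DIRECTIVE
print(TagPreModes('Directive').name)      # -> DIRECTIVE
```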


@@ -1,8 +1,20 @@
"""
Methods for converting string representations of film footage.
"""
from fractions import Fraction
import re
from typing import Optional

def footage_to_seconds(footage: str) -> Optional[Fraction]:
    """
    Converts a string representation of a footage (35mm, 24fps)
    into a :class:`Fraction`, this fraction being some number of
    seconds.

    :param footage: A string representation of a footage of the form
        resembling "90+01".
    """
    m = re.match(r'(\d+)\+(\d+)(\.\d+)?', footage)
    if m is None:
        return None
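The body of `footage_to_seconds` is cut off above, so as a worked sketch of the conversion the docstring describes, assuming the standard 16 frames per foot of 35mm film at 24 fps (the helper name and arithmetic are illustrative, not the project's actual code):

```python
# Illustrative: "90+01" means 90 feet + 1 frame; at 16 frames/foot and
# 24 fps that is (90 * 16 + 1) / 24 seconds, kept exact as a Fraction.
from fractions import Fraction
import re
from typing import Optional


def footage_to_seconds_sketch(footage: str) -> Optional[Fraction]:
    m = re.match(r'(\d+)\+(\d+)(\.\d+)?', footage)
    if m is None:
        return None
    feet, frames = int(m.group(1)), int(m.group(2))
    total_frames = Fraction(feet * 16 + frames)
    if m.group(3):
        total_frames += Fraction(m.group(3))
    return total_frames / 24


print(footage_to_seconds_sketch("90+01"))  # 1441 frames -> prints 1441/24
```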


@@ -1,12 +1,14 @@
# import ffmpeg  # ffmpeg-python

# TODO: Implement movie export
# def create_movie(event):
#     start = event['Movie.Start_Offset_Seconds']
#     duration = event['PT.Clip.Finish_Seconds'] -
#         event['PT.Clip.Start_Seconds']
#     input_movie = event['Movie.Filename']
#     print("Will make movie starting at {}, dur {} from movie {}"
#           .format(start, duration, input_movie))
#
#
# def export_movies(events):


@@ -17,6 +17,8 @@ from typing import List
# This is from https://code.activestate.com/recipes/576832/ for
# generating page count messages


class ReportCanvas(canvas.Canvas):

    def __init__(self, *args, **kwargs):
        canvas.Canvas.__init__(self, *args, **kwargs)
@@ -38,10 +40,12 @@ class ReportCanvas(canvas.Canvas):
    def draw_page_number(self, page_count):
        self.saveState()
        self.setFont('Helvetica', 10)  # FIXME make this customizable
        self.drawString(0.5 * inch, 0.5 * inch,
                        "Page %d of %d" % (self._pageNumber, page_count))

        right_edge = self._pagesize[0] - 0.5 * inch
        self.drawRightString(right_edge, 0.5 * inch,
                             self._report_date.strftime("%m/%d/%Y %H:%M"))

        top_line = self.beginPath()
        top_line.moveTo(0.5 * inch, 0.75 * inch)
@@ -74,16 +78,18 @@ def make_doc_template(page_size, filename, document_title,
    footer_box, page_box = page_box.split_y(0.25 * inch, direction='u')
    header_box, page_box = page_box.split_y(0.75 * inch, direction='d')
    title_box, report_box = header_box.split_x(3.5 * inch, direction='r')

    on_page_lambda = (lambda c, _:
                      draw_header_footer(c, report_box, title_box,
                                         footer_box, title=title,
                                         supervisor=supervisor,
                                         document_subheader=document_subheader,
                                         client=client,
                                         doc_title=document_header))

    frames = [Frame(page_box.min_x, page_box.min_y,
                    page_box.width, page_box.height)]
    page_template = PageTemplate(id="Main",
                                 frames=frames,
                                 onPage=on_page_lambda)
@@ -119,12 +125,17 @@ def time_format(mins, zero_str="-"):
    return "%i:%02i" % (hh, mm)


def draw_header_footer(a_canvas: ReportCanvas, left_box, right_box,
                       footer_box, title: str, supervisor: str,
                       document_subheader: str, client: str, doc_title="",
                       font_name='Helvetica'):
    (_supervisor_box, client_box,), title_box = \
        right_box.divide_y([16., 16., ])
    title_box.draw_text_cell(a_canvas, title, font_name, 18,
                             inset_y=2., inset_x=5.)
    client_box.draw_text_cell(a_canvas, client, font_name, 11,
                              inset_y=2., inset_x=5.)

    a_canvas.saveState()
    a_canvas.setLineWidth(0.5)
@@ -139,16 +150,20 @@ def draw_header_footer(a_canvas: ReportCanvas, left_box, right_box, footer_box,
    a_canvas.drawPath(tline2)
    a_canvas.restoreState()

    (doc_title_cell, spotting_version_cell,), _ = \
        left_box.divide_y([18., 14], direction='d')
    doc_title_cell.draw_text_cell(a_canvas, doc_title, font_name, 14.,
                                  inset_y=2.)

    if document_subheader is not None:
        spotting_version_cell.draw_text_cell(a_canvas, document_subheader,
                                             font_name, 12., inset_y=2.)

    if supervisor is not None:
        a_canvas.setFont(font_name, 11.)
        a_canvas.drawCentredString(footer_box.min_x + footer_box.width / 2.,
                                   footer_box.min_y, supervisor)


class GRect:
@@ -201,10 +216,12 @@ class GRect:
        else:
            if direction == 'l':
                return (GRect(self.min_x, self.min_y, at, self.height),
                        GRect(self.min_x + at, self.y,
                              self.width - at, self.height))
            else:
                return (GRect(self.max_x - at, self.y, at, self.height),
                        GRect(self.min_x, self.y,
                              self.width - at, self.height))

    def split_y(self, at, direction='u'):
        if at >= self.height:
@@ -214,19 +231,23 @@ class GRect:
        else:
            if direction == 'u':
                return (GRect(self.x, self.y, self.width, at),
                        GRect(self.x, self.y + at,
                              self.width, self.height - at))
            else:
                return (GRect(self.x, self.max_y - at, self.width, at),
                        GRect(self.x, self.y,
                              self.width, self.height - at))

    def inset_xy(self, dx, dy):
        return GRect(self.x + dx, self.y + dy,
                     self.width - dx * 2, self.height - dy * 2)

    def inset(self, d):
        return self.inset_xy(d, d)

    def __repr__(self):
        return "<GRect x=%f y=%f width=%f height=%f>" % \
            (self.x, self.y, self.width, self.height)

    def divide_x(self, x_list, direction='l'):
        ret_list = list()
@@ -259,13 +280,17 @@ class GRect:
        def draw_border_impl(en):
            if en == 'min_x':
                coordinates = ((self.min_x, self.min_y),
                               (self.min_x, self.max_y))
            elif en == 'max_x':
                coordinates = ((self.max_x, self.min_y),
                               (self.max_x, self.max_y))
            elif en == 'min_y':
                coordinates = ((self.min_x, self.min_y),
                               (self.max_x, self.min_y))
            elif en == 'max_y':
                coordinates = ((self.min_x, self.max_y),
                               (self.max_x, self.max_y))
            else:
                return
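The `GRect` splitting methods reflowed above all follow one invariant: splitting at an offset yields two rectangles whose extents sum to the original. A toy sketch of the `split_x` case in the `'l'` direction (a local `Rect` stand-in, not the project's `GRect`):

```python
# Illustrative stand-in for GRect.split_x(direction='l'): the left piece
# is `at` wide, the right piece gets the remainder, widths sum to the whole.
from dataclasses import dataclass


@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    def split_x(self, at):
        return (Rect(self.x, self.y, at, self.height),
                Rect(self.x + at, self.y, self.width - at, self.height))


left, right = Rect(0., 0., 8.5, 11.).split_x(3.5)
print(left.width + right.width)  # -> 8.5
print(right.x)                   # -> 3.5
```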


@@ -4,7 +4,7 @@ from typing import Tuple, List
from reportlab.lib.pagesizes import portrait, letter
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch
from reportlab.platypus import Paragraph, Table

from ptulsconv.broadcast_timecode import TimecodeFormat
from ptulsconv.pdf import make_doc_template
@@ -12,14 +12,15 @@ from ptulsconv.pdf import make_doc_template
# TODO: A Continuity


def table_for_scene(scene, tc_format, font_name='Helvetica'):
    scene_style = getSampleStyleSheet()['Normal']
    scene_style.fontName = font_name
    scene_style.leftIndent = 0.
    scene_style.leftPadding = 0.
    scene_style.spaceAfter = 18.

    tc_data = "<em>%s</em><br />%s" % (tc_format.seconds_to_smpte(scene[2]),
                                       tc_format.seconds_to_smpte(scene[3]))

    row = [
        Paragraph(tc_data, scene_style),
@@ -36,11 +37,11 @@ def table_for_scene(scene, tc_format, font_name = 'Helvetica'):
def output_report(scenes: List[Tuple[str, str, Fraction, Fraction]],
                  tc_display_format: TimecodeFormat,
                  title: str, client: str, supervisor, paper_size=letter):
    filename = "%s Continuity.pdf" % title
    document_header = "Continuity"
    doc = make_doc_template(page_size=portrait(paper_size),
                            filename=filename,
                            document_title="Continuity",
                            title=title,


@@ -1,7 +1,7 @@
from typing import List, Optional from typing import List, Optional
from reportlab.pdfbase import pdfmetrics # from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont # from reportlab.pdfbase.ttfonts import TTFont
from reportlab.lib.units import inch from reportlab.lib.units import inch
from reportlab.lib.pagesizes import letter, portrait from reportlab.lib.pagesizes import letter, portrait
@@ -14,9 +14,12 @@ from .__init__ import time_format, make_doc_template
from ..docparser.adr_entity import ADRLine from ..docparser.adr_entity import ADRLine
def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]], show_priorities=False, include_omitted=False): def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]],
show_priorities=False, include_omitted=False):
columns = list() columns = list()
reel_numbers = reel_list or sorted(set([x.reel for x in lines if x.reel is not None])) reel_numbers = reel_list or sorted(
set([x.reel for x in lines if x.reel is not None])
)
num_column_width = 15. / 32. * inch num_column_width = 15. / 32. * inch
@@ -33,7 +36,10 @@ def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]], show_pri
'heading': 'Role', 'heading': 'Role',
'value_getter': lambda recs: recs[0].character_name, 'value_getter': lambda recs: recs[0].character_name,
'value_getter2': lambda recs: recs[0].actor_name or "", 'value_getter2': lambda recs: recs[0].actor_name or "",
'style_getter': lambda col_index: [('LINEAFTER', (col_index, 0), (col_index, -1), 1.0, colors.black)], 'style_getter': lambda col_index: [('LINEAFTER',
(col_index, 0),
(col_index, -1),
1.0, colors.black)],
'width': 1.75 * inch, 'width': 1.75 * inch,
'summarize': False 'summarize': False
}) })
@@ -41,30 +47,48 @@ def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]], show_pri
columns.append({ columns.append({
'heading': 'TV', 'heading': 'TV',
'value_getter': lambda recs: len([r for r in recs if r.tv]), 'value_getter': lambda recs: len([r for r in recs if r.tv]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0. 'value_getter2': (lambda recs:
for r in recs if r.tv])), time_format(sum([r.time_budget_mins or 0.
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER'), for r in recs if r.tv]))
('LINEBEFORE', (col_index, 0), (col_index, -1), 1., colors.black), ),
('LINEAFTER', (col_index, 0), (col_index, -1), .5, colors.gray)], 'style_getter': (lambda col_index:
[('ALIGN', (col_index, 0), (col_index, -1),
'CENTER'),
('LINEBEFORE', (col_index, 0), (col_index, -1),
1., colors.black),
('LINEAFTER', (col_index, 0), (col_index, -1),
.5, colors.gray)]
),
'width': num_column_width 'width': num_column_width
}) })
columns.append({ columns.append({
'heading': 'Opt', 'heading': 'Opt',
'value_getter': lambda recs: len([r for r in recs if r.optional]), 'value_getter': lambda recs: len([r for r in recs if r.optional]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0. 'value_getter2': (lambda recs:
for r in recs if r.optional])), time_format(sum([r.time_budget_mins or 0.
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER'), for r in recs if r.optional]))
('LINEAFTER', (col_index, 0), (col_index, -1), .5, colors.gray)], ),
'style_getter': (lambda col_index:
[('ALIGN', (col_index, 0), (col_index, -1),
'CENTER'),
('LINEAFTER', (col_index, 0), (col_index, -1),
.5, colors.gray)]
),
'width': num_column_width 'width': num_column_width
}) })
columns.append({ columns.append({
'heading': 'Eff', 'heading': 'Eff',
'value_getter': lambda recs: len([r for r in recs if r.effort]), 'value_getter': lambda recs: len([r for r in recs if r.effort]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0. 'value_getter2': (lambda recs:
for r in recs if r.effort])), time_format(sum([r.time_budget_mins or 0.
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')], for r in recs if r.effort]))
),
'style_getter': (lambda col_index:
[('ALIGN', (col_index, 0), (col_index, -1),
'CENTER')]
),
'width': num_column_width 'width': num_column_width
}) })
@@ -80,23 +104,26 @@ def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]], show_pri
}) })
if len(reel_numbers) > 0: if len(reel_numbers) > 0:
# columns.append({
# 'heading': 'RX',
# 'value_getter': lambda recs: blank_len([r for r in recs if 'Reel' not in r.keys()]),
# 'value_getter2': lambda recs: time_format(sum([r.get('Time Budget Mins', 0.) for r in recs
# if 'Reel' not in r.keys()])),
# 'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')],
# 'width': num_column_width
# })
for n in reel_numbers: for n in reel_numbers:
columns.append({ columns.append({
'heading': n, 'heading': n,
'value_getter': lambda recs, n1=n: len([r for r in recs if r.reel == n1]), 'value_getter': (lambda recs, n1=n:
'value_getter2': lambda recs, n1=n: time_format(sum([r.time_budget_mins or 0. for r len([r for r in recs if r.reel == n1])
in recs if r.reel == n1])), ),
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER'), 'value_getter2': (lambda recs, n1=n:
('LINEAFTER', (col_index, 0), (col_index, -1), .5, colors.gray)], time_format(sum([r.time_budget_mins or 0.
for r in recs
if r.reel == n1]))
),
'style_getter': (lambda col_index:
[('ALIGN', (col_index, 0), (col_index, -1),
'CENTER'),
('LINEAFTER', (col_index, 0),
(col_index, -1),
.5, colors.gray)]
),
'width': num_column_width 'width': num_column_width
}) })
@@ -104,18 +131,26 @@ def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]], show_pri
for n in range(1, 6,): for n in range(1, 6,):
columns.append({ columns.append({
'heading': 'P%i' % n, 'heading': 'P%i' % n,
'value_getter': lambda recs: len([r for r in recs if r.priority == n]), 'value_getter': lambda recs: len([r for r in recs
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0. if r.priority == n]),
for r in recs if r.priority == n])), 'value_getter2': (lambda recs:
time_format(sum([r.time_budget_mins or 0.
for r in recs
if r.priority == n]))
),
'style_getter': lambda col_index: [], 'style_getter': lambda col_index: [],
'width': num_column_width 'width': num_column_width
}) })
columns.append({ columns.append({
'heading': '>P5', 'heading': '>P5',
'value_getter': lambda recs: len([r for r in recs if (r.priority or 5) > 5]), 'value_getter': lambda recs: len([r for r in recs
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0. if (r.priority or 5) > 5]),
for r in recs if (r.priority or 5) > 5])), 'value_getter2': (lambda recs:
time_format(sum([r.time_budget_mins or 0.
for r in recs
if (r.priority or 5) > 5]))
),
'style_getter': lambda col_index: [], 'style_getter': lambda col_index: [],
'width': num_column_width 'width': num_column_width
}) })
@@ -124,32 +159,47 @@ def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]], show_pri
     columns.append({
         'heading': 'Omit',
         'value_getter': lambda recs: len([r for r in recs if r.omitted]),
-        'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
-                                                       for r in recs if r.omitted])),
-        'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')],
+        'value_getter2': (lambda recs:
+                          time_format(sum([r.time_budget_mins or 0.
+                                           for r in recs if r.omitted]))),
+        'style_getter': (lambda col_index:
+                         [('ALIGN', (col_index, 0), (col_index, -1),
+                           'CENTER')]
+                         ),
         'width': num_column_width
     })
     columns.append({
         'heading': 'Total',
         'value_getter': lambda recs: len([r for r in recs if not r.omitted]),
-        'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
-                                                       for r in recs if not r.omitted]), zero_str=None),
-        'style_getter': lambda col_index: [('LINEBEFORE', (col_index, 0), (col_index, -1), 1.0, colors.black),
-                                           ('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')],
+        'value_getter2': (lambda recs:
+                          time_format(
+                              sum([r.time_budget_mins or 0.
+                                   for r in recs if not r.omitted])
+                          )
+                          ),
+        'style_getter': (lambda col_index:
+                         [('LINEBEFORE', (col_index, 0), (col_index, -1),
+                           1.0, colors.black),
+                          ('ALIGN', (col_index, 0), (col_index, -1),
+                           'CENTER')]
+                         ),
         'width': 0.5 * inch
     })
     return columns

-def populate_columns(lines: List[ADRLine], columns, include_omitted, _page_size):
+def populate_columns(lines: List[ADRLine], columns, include_omitted,
+                     _page_size):
     data = list()
     styles = list()
     columns_widths = list()
-    sorted_character_numbers: List[str] = sorted(set([x.character_id for x in lines]),
-                                                 key=lambda x: str(x))
+    sorted_character_numbers: List[str] = sorted(
+        set([x.character_id for x in lines]),
+        key=lambda x: str(x))

     # construct column styles
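The line-count table above is assembled from a list of column-spec dicts ('heading', 'value_getter', 'style_getter', 'width'), each `value_getter` being applied to a group of records. A minimal standalone sketch of that pattern — `render_table` and the dict-based records are hypothetical stand-ins, not ptulsconv code:

```python
# Sketch of the column-spec pattern used in build_columns above: each
# column is a dict carrying a heading and a value_getter applied to a
# list of records (the real code also carries style_getter and width).
# render_table and the dict-based records are hypothetical stand-ins.

def render_table(columns, record_groups):
    """Build a 2-D list: one header row, then one row per record group."""
    rows = [[col['heading'] for col in columns]]
    for group in record_groups:
        rows.append([col['value_getter'](group) for col in columns])
    return rows

columns = [
    {'heading': 'Total', 'value_getter': lambda recs: len(recs)},
    {'heading': 'Omit',
     'value_getter': lambda recs: len([r for r in recs if r['omitted']])},
]

groups = [
    [{'omitted': True}, {'omitted': False}],
    [{'omitted': False}],
]
table = render_table(columns, groups)
# table[0] is the header row; table[1:] are the per-group data rows
```

Keeping the per-column logic in data rather than code is what lets `build_columns` vary the column set (e.g. per reel) without touching the rendering loop.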
@@ -174,8 +224,10 @@ def populate_columns(lines: List[ADRLine], columns, include_omitted, _page_size)
         row_data.append(col['value_getter'](list(char_records)))
         row_data2.append(col['value_getter2'](list(char_records)))
-        styles.extend([('TEXTCOLOR', (0, row2_index), (-1, row2_index), colors.red),
-                       ('LINEBELOW', (0, row2_index), (-1, row2_index), 0.5, colors.black)])
+        styles.extend([('TEXTCOLOR', (0, row2_index), (-1, row2_index),
+                        colors.red),
+                       ('LINEBELOW', (0, row2_index), (-1, row2_index),
+                        0.5, colors.black)])
         data.append(row_data)
         data.append(row_data2)
@@ -192,7 +244,8 @@ def populate_columns(lines: List[ADRLine], columns, include_omitted, _page_size)
         summary_row1.append("")
         summary_row2.append("")
-    styles.append(('LINEABOVE', (0, row1_index), (-1, row1_index), 2.0, colors.black))
+    styles.append(('LINEABOVE', (0, row1_index), (-1, row1_index), 2.0,
+                   colors.black))
     data.append(summary_row1)
     data.append(summary_row2)
@@ -204,17 +257,20 @@ def populate_columns(lines: List[ADRLine], columns, include_omitted, _page_size)
     # pass

-def output_report(lines: List[ADRLine], reel_list: List[str], include_omitted=False,
-                  page_size=portrait(letter), font_name='Helvetica'):
-    columns = build_columns(lines, include_omitted=include_omitted, reel_list=reel_list)
-    data, style, columns_widths = populate_columns(lines, columns, include_omitted, page_size)
+def output_report(lines: List[ADRLine], reel_list: List[str],
+                  include_omitted=False, page_size=portrait(letter),
+                  font_name='Helvetica'):
+    columns = build_columns(lines, include_omitted=include_omitted,
+                            reel_list=reel_list)
+    data, style, columns_widths = populate_columns(lines, columns,
+                                                   include_omitted, page_size)
     style.append(('FONTNAME', (0, 0), (-1, -1), font_name))
     style.append(('FONTSIZE', (0, 0), (-1, -1), 9.))
     style.append(('LINEBELOW', (0, 0), (-1, 0), 1.0, colors.black))
     # style.append(('LINEBELOW', (0, 1), (-1, -1), 0.25, colors.gray))
-    #pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))
+    # pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))

     title = "%s Line Count" % lines[0].title
     filename = title + '.pdf'
@@ -226,7 +282,8 @@ def output_report(lines: List[ADRLine], reel_list: List[str], include_omitted=Fa
                             document_header='Line Count')

     # header_data, header_style, header_widths = build_header(columns_widths)
-    # header_table = Table(data=header_data, style=header_style, colWidths=header_widths)
+    # header_table = Table(data=header_data, style=header_style,
+    #                      colWidths=header_widths)
     table = Table(data=data, style=style, colWidths=columns_widths)
@@ -241,6 +298,7 @@ def output_report(lines: List[ADRLine], reel_list: List[str], include_omitted=Fa
     omitted_count = len([x for x in lines if x.omitted])
     if not include_omitted and omitted_count > 0:
-        story.append(Paragraph("* %i Omitted lines are excluded." % omitted_count, style))
+        story.append(Paragraph("* %i Omitted lines are excluded." %
+                               omitted_count, style))

     doc.build(story)


@@ -3,4 +3,4 @@
 def output_report(records):
     # order by start
     pass


@@ -27,23 +27,28 @@ def build_aux_data_field(line: ADRLine):
     tag_field = ""
     if line.effort:
         bg_color = 'red'
-        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " % (bg_color, fg_color, "EFF")
+        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " \
+            % (bg_color, fg_color, "EFF")
     elif line.tv:
         bg_color = 'blue'
-        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " % (bg_color, fg_color, "TV")
+        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " \
+            % (bg_color, fg_color, "TV")
     elif line.adlib:
         bg_color = 'purple'
-        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " % (bg_color, fg_color, "ADLIB")
+        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " \
+            % (bg_color, fg_color, "ADLIB")
     elif line.optional:
         bg_color = 'green'
-        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font>" % (bg_color, fg_color, "OPTIONAL")
+        tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font>" \
+            % (bg_color, fg_color, "OPTIONAL")

     entries.append(tag_field)
     return "<br />".join(entries)

-def build_story(lines: List[ADRLine], tc_rate: TimecodeFormat, font_name='Helvetica'):
+def build_story(lines: List[ADRLine], tc_rate: TimecodeFormat,
+                font_name='Helvetica'):
     story = list()
     this_scene = None
@@ -60,7 +65,8 @@ def build_story(lines: List[ADRLine], tc_rate: TimecodeFormat, font_name='Helvet
                    ('LEFTPADDING', (0, 0), (0, 0), 0.0),
                    ('BOTTOMPADDING', (0, 0), (-1, -1), 24.)]
-    cue_number_field = "%s<br /><font fontSize=7>%s</font>" % (line.cue_number, line.character_name)
+    cue_number_field = "%s<br /><font fontSize=7>%s</font>" \
+        % (line.cue_number, line.character_name)

     time_data = time_format(line.time_budget_mins)
@@ -79,7 +85,8 @@ def build_story(lines: List[ADRLine], tc_rate: TimecodeFormat, font_name='Helvet
                        ]]

     line_table = Table(data=line_table_data,
-                       colWidths=[inch * 0.75, inch, inch * 3., 0.5 * inch, inch * 2.],
+                       colWidths=[inch * 0.75, inch, inch * 3., 0.5 * inch,
+                                  inch * 2.],
                        style=table_style)

     if (line.scene or "[No Scene]") != this_scene:
@@ -97,7 +104,7 @@ def build_story(lines: List[ADRLine], tc_rate: TimecodeFormat, font_name='Helvet

 def build_tc_data(line: ADRLine, tc_format: TimecodeFormat):
     tc_data = tc_format.seconds_to_smpte(line.start) + "<br />" + \
         tc_format.seconds_to_smpte(line.finish)

     third_line = []
     if line.reel is not None:
         if line.reel[0:1] == 'R':
@@ -111,11 +118,12 @@ def build_tc_data(line: ADRLine, tc_format: TimecodeFormat):
     return tc_data

-def generate_report(page_size, lines: List[ADRLine], tc_rate: TimecodeFormat, character_number=None,
-                    include_omitted=True):
+def generate_report(page_size, lines: List[ADRLine], tc_rate: TimecodeFormat,
+                    character_number=None, include_omitted=True):
     if character_number is not None:
         lines = [r for r in lines if r.character_id == character_number]
-        title = "%s ADR Report (%s)" % (lines[0].title, lines[0].character_name)
+        title = "%s ADR Report (%s)" % (lines[0].title,
+                                        lines[0].character_name)
         document_header = "%s ADR Report" % lines[0].character_name
     else:
         title = "%s ADR Report" % lines[0].title


@@ -1,7 +1,7 @@
 from reportlab.pdfgen.canvas import Canvas
-from reportlab.pdfbase import pdfmetrics
-from reportlab.pdfbase.ttfonts import TTFont
+# from reportlab.pdfbase import pdfmetrics
+# from reportlab.pdfbase.ttfonts import TTFont
 from reportlab.lib.units import inch
 from reportlab.lib.pagesizes import letter
@@ -11,20 +11,23 @@ from reportlab.platypus import Paragraph
 from .__init__ import GRect
-from ptulsconv.broadcast_timecode import TimecodeFormat, footage_to_frame_count
+from ptulsconv.broadcast_timecode import TimecodeFormat
 from ptulsconv.docparser.adr_entity import ADRLine
 import datetime

 font_name = 'Helvetica'

 def draw_header_block(canvas, rect, record: ADRLine):
-    rect.draw_text_cell(canvas, record.cue_number, "Helvetica", 44, vertical_align='m')
+    rect.draw_text_cell(canvas, record.cue_number, "Helvetica", 44,
+                        vertical_align='m')

 def draw_character_row(canvas, rect, record: ADRLine):
     label_frame, value_frame = rect.split_x(1.25 * inch)
-    label_frame.draw_text_cell(canvas, "CHARACTER", font_name, 10, force_baseline=9.)
+    label_frame.draw_text_cell(canvas, "CHARACTER", font_name, 10,
+                               force_baseline=9.)
     line = "%s / %s" % (record.character_id, record.character_name)
     if record.actor_name is not None:
         line = line + " / " + record.actor_name
@@ -33,7 +36,8 @@ def draw_character_row(canvas, rect, record: ADRLine):

 def draw_cue_number_block(canvas, rect, record: ADRLine):
-    (label_frame, number_frame,), aux_frame = rect.divide_y([0.20 * inch, 0.375 * inch], direction='d')
+    (label_frame, number_frame,), aux_frame = \
+        rect.divide_y([0.20 * inch, 0.375 * inch], direction='d')
     label_frame.draw_text_cell(canvas, "CUE NUMBER", font_name, 10,
                                inset_y=5., vertical_align='t')
     number_frame.draw_text_cell(canvas, record.cue_number, font_name, 14,
@@ -55,18 +59,25 @@ def draw_cue_number_block(canvas, rect, record: ADRLine):
     rect.draw_border(canvas, 'max_x')

-def draw_timecode_block(canvas, rect, record: ADRLine, tc_display_format: TimecodeFormat):
+def draw_timecode_block(canvas, rect, record: ADRLine,
+                        tc_display_format: TimecodeFormat):
     (in_label_frame, in_frame, out_label_frame, out_frame), _ = rect.divide_y(
         [0.20 * inch, 0.25 * inch, 0.20 * inch, 0.25 * inch], direction='d')

     in_label_frame.draw_text_cell(canvas, "IN", font_name, 10,
                                   vertical_align='t', inset_y=5., inset_x=5.)
-    in_frame.draw_text_cell(canvas, tc_display_format.seconds_to_smpte(record.start), font_name, 14,
-                            inset_x=10., inset_y=2., draw_baseline=True)
+    in_frame.draw_text_cell(canvas,
+                            tc_display_format.seconds_to_smpte(record.start),
+                            font_name, 14,
+                            inset_x=10., inset_y=2.,
+                            draw_baseline=True)
     out_label_frame.draw_text_cell(canvas, "OUT", font_name, 10,
                                    vertical_align='t', inset_y=5., inset_x=5.)
-    out_frame.draw_text_cell(canvas, tc_display_format.seconds_to_smpte(record.finish), font_name, 14,
-                             inset_x=10., inset_y=2., draw_baseline=True)
+    out_frame.draw_text_cell(canvas,
+                             tc_display_format.seconds_to_smpte(record.finish),
+                             font_name, 14,
+                             inset_x=10., inset_y=2.,
+                             draw_baseline=True)

     rect.draw_border(canvas, 'max_x')
@@ -91,13 +102,15 @@ def draw_reason_block(canvas, rect, record: ADRLine):

     p = Paragraph(record.note or "", style)
-    notes_value.draw_flowable(canvas, p, draw_baselines=True, inset_x=5., inset_y=5.)
+    notes_value.draw_flowable(canvas, p, draw_baselines=True,
+                              inset_x=5., inset_y=5.)

 def draw_prompt(canvas, rect, prompt=""):
     label, block = rect.split_y(0.20 * inch, direction='d')
-    label.draw_text_cell(canvas, "PROMPT", font_name, 10, vertical_align='t', inset_y=5., inset_x=0.)
+    label.draw_text_cell(canvas, "PROMPT", font_name, 10, vertical_align='t',
+                         inset_y=5., inset_x=0.)

     style = getSampleStyleSheet()['BodyText']
     style.fontName = font_name
@@ -117,7 +130,8 @@ def draw_prompt(canvas, rect, prompt=""):

 def draw_notes(canvas, rect, note=""):
     label, block = rect.split_y(0.20 * inch, direction='d')
-    label.draw_text_cell(canvas, "NOTES", font_name, 10, vertical_align='t', inset_y=5., inset_x=0.)
+    label.draw_text_cell(canvas, "NOTES", font_name, 10, vertical_align='t',
+                         inset_y=5., inset_x=0.)

     style = getSampleStyleSheet()['BodyText']
     style.fontName = font_name
@@ -169,31 +183,43 @@ def draw_take_grid(canvas, rect):
     canvas.restoreState()

-def draw_aux_block(canvas, rect, recording_time_sec_this_line, recording_time_sec):
+def draw_aux_block(canvas, rect, recording_time_sec_this_line,
+                   recording_time_sec):
     rect.draw_border(canvas, 'min_x')
     content_rect = rect.inset_xy(10., 10.)
-    lines, last_line = content_rect.divide_y([12., 12., 24., 24., 24., 24.], direction='d')
+    lines, last_line = content_rect.divide_y([12., 12., 24., 24., 24., 24.],
+                                             direction='d')
     lines[0].draw_text_cell(canvas,
-                            "Time for this line: %.1f mins" % (recording_time_sec_this_line / 60.), font_name, 9.)
-    lines[1].draw_text_cell(canvas, "Running time: %03.1f mins" % (recording_time_sec / 60.), font_name, 9.)
-    lines[2].draw_text_cell(canvas, "Actual Start: ______________", font_name, 9., vertical_align='b')
-    lines[3].draw_text_cell(canvas, "Record Date: ______________", font_name, 9., vertical_align='b')
-    lines[4].draw_text_cell(canvas, "Engineer: ______________", font_name, 9., vertical_align='b')
-    lines[5].draw_text_cell(canvas, "Location: ______________", font_name, 9., vertical_align='b')
+                            "Time for this line: %.1f mins" %
+                            (recording_time_sec_this_line / 60.),
+                            font_name, 9.)
+    lines[1].draw_text_cell(canvas, "Running time: %03.1f mins" %
+                            (recording_time_sec / 60.), font_name, 9.)
+    lines[2].draw_text_cell(canvas, "Actual Start: ______________",
+                            font_name, 9., vertical_align='b')
+    lines[3].draw_text_cell(canvas, "Record Date: ______________",
+                            font_name, 9., vertical_align='b')
+    lines[4].draw_text_cell(canvas, "Engineer: ______________",
+                            font_name, 9., vertical_align='b')
+    lines[5].draw_text_cell(canvas, "Location: ______________",
+                            font_name, 9., vertical_align='b')

-def draw_footer(canvas, rect, record: ADRLine, report_date, line_no, total_lines):
+def draw_footer(canvas, rect, record: ADRLine, report_date, line_no,
+                total_lines):
     rect.draw_border(canvas, 'max_y')
     report_date_s = [report_date.strftime("%c")]
     spotting_name = [record.spot] if record.spot is not None else []
     pages_s = ["Line %i of %i" % (line_no, total_lines)]
     footer_s = " - ".join(report_date_s + spotting_name + pages_s)
-    rect.draw_text_cell(canvas, footer_s, font_name=font_name, font_size=10., inset_y=2.)
+    rect.draw_text_cell(canvas, footer_s, font_name=font_name, font_size=10.,
+                        inset_y=2.)

-def create_report_for_character(records, report_date, tc_display_format: TimecodeFormat):
+def create_report_for_character(records, report_date,
+                                tc_display_format: TimecodeFormat):
     outfile = "%s_%s_%s_Log.pdf" % (records[0].title,
                                     records[0].character_id,
@@ -201,20 +227,24 @@ def create_report_for_character(records, report_date, tc_display_format: Timecod
     assert outfile is not None
     assert outfile[-4:] == '.pdf', "Output file must have 'pdf' extension!"

-    #pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))
+    # pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))

     page: GRect = GRect(0, 0, letter[0], letter[1])
     page = page.inset(inch * 0.5)
-    (header_row, char_row, data_row, prompt_row, notes_row, takes_row), footer = \
-        page.divide_y([0.875 * inch, 0.375 * inch, inch, 3.0 * inch, 1.5 * inch, 3 * inch], direction='d')
+    (header_row, char_row, data_row,
+     prompt_row, notes_row, takes_row), footer = \
+        page.divide_y([0.875 * inch, 0.375 * inch, inch,
+                       3.0 * inch, 1.5 * inch, 3 * inch], direction='d')
     cue_header_block, title_header_block = header_row.split_x(4.0 * inch)
-    (cue_number_block, timecode_block), reason_block = data_row.divide_x([1.5 * inch, 1.5 * inch])
+    (cue_number_block, timecode_block), reason_block = \
+        data_row.divide_x([1.5 * inch, 1.5 * inch])
     (take_grid_block), aux_block = takes_row.split_x(5.25 * inch)

     c = Canvas(outfile, pagesize=letter,)
-    c.setTitle("%s %s (%s) Supervisor's Log" % (records[0].title, records[0].character_name,
+    c.setTitle("%s %s (%s) Supervisor's Log" % (records[0].title,
+                                                records[0].character_name,
                                                 records[0].character_id))
     c.setAuthor(records[0].supervisor)
@@ -223,7 +253,8 @@ def create_report_for_character(records, report_date, tc_display_format: Timecod
     line_n = 1
     for record in records:
         record: ADRLine
-        recording_time_sec_this_line: float = (record.time_budget_mins or 6.0) * 60.0
+        recording_time_sec_this_line: float = (
+            record.time_budget_mins or 6.0) * 60.0
         recording_time_sec = recording_time_sec + recording_time_sec_this_line

         draw_header_block(c, cue_header_block, record)
@@ -233,14 +264,17 @@ def create_report_for_character(records, report_date, tc_display_format: Timecod
         # draw_title_box(c, title_header_block, record)
         draw_character_row(c, char_row, record)
         draw_cue_number_block(c, cue_number_block, record)
-        draw_timecode_block(c, timecode_block, record, tc_display_format=tc_display_format)
+        draw_timecode_block(c, timecode_block, record,
+                            tc_display_format=tc_display_format)
         draw_reason_block(c, reason_block, record)
-        draw_prompt(c, prompt_row, prompt=record.prompt)
+        draw_prompt(c, prompt_row, prompt=record.prompt or "")
         draw_notes(c, notes_row, note="")
         draw_take_grid(c, take_grid_block)
-        draw_aux_block(c, aux_block, recording_time_sec_this_line, recording_time_sec)
-        draw_footer(c, footer, record, report_date, line_no=line_n, total_lines=total_lines)
+        draw_aux_block(c, aux_block, recording_time_sec_this_line,
+                       recording_time_sec)
+        draw_footer(c, footer, record, report_date, line_no=line_n,
+                    total_lines=total_lines)
         line_n = line_n + 1
         c.showPage()
@@ -254,5 +288,6 @@ def output_report(lines, tc_display_format: TimecodeFormat):
     character_numbers = set([x.character_id for x in lines])
     for n in character_numbers:
-        create_report_for_character([e for e in events if e.character_id == n], report_date,
+        create_report_for_character([e for e in events if e.character_id == n],
+                                    report_date,
                                     tc_display_format=tc_display_format)


@@ -5,36 +5,42 @@ from .__init__ import make_doc_template
 from reportlab.lib.units import inch
 from reportlab.lib.pagesizes import letter
-from reportlab.platypus import Paragraph, Spacer, KeepTogether, Table, HRFlowable
+from reportlab.platypus import Paragraph, Spacer, KeepTogether, Table, \
+    HRFlowable
 from reportlab.lib.styles import getSampleStyleSheet
 from reportlab.lib import colors
-from reportlab.pdfbase import pdfmetrics
-from reportlab.pdfbase.ttfonts import TTFont
+# from reportlab.pdfbase import pdfmetrics
+# from reportlab.pdfbase.ttfonts import TTFont

 from ..broadcast_timecode import TimecodeFormat
 from ..docparser.adr_entity import ADRLine

-def output_report(lines: List[ADRLine], tc_display_format: TimecodeFormat, font_name="Helvetica"):
+def output_report(lines: List[ADRLine], tc_display_format: TimecodeFormat,
+                  font_name="Helvetica"):
     character_numbers = set([n.character_id for n in lines])

-    #pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))
+    # pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))

     for n in character_numbers:
-        char_lines = [line for line in lines if not line.omitted and line.character_id == n]
+        char_lines = [line for line in lines
+                      if not line.omitted and line.character_id == n]
         character_name = char_lines[0].character_name
         char_lines = sorted(char_lines, key=lambda line: line.start)
-        title = "%s (%s) %s ADR Script" % (char_lines[0].title, character_name, n)
-        filename = "%s_%s_%s_ADR Script.pdf" % (char_lines[0].title, n, character_name)
+        title = "%s (%s) %s ADR Script" % (char_lines[0].title,
+                                           character_name, n)
+        filename = "%s_%s_%s_ADR Script.pdf" % (char_lines[0].title,
+                                                n, character_name)

-        doc = make_doc_template(page_size=letter, filename=filename, document_title=title,
+        doc = make_doc_template(page_size=letter, filename=filename,
+                                document_title=title,
                                 title=char_lines[0].title,
-                                document_subheader=char_lines[0].spot,
-                                supervisor=char_lines[0].supervisor,
-                                client=char_lines[0].client,
-                                document_header=character_name)
+                                document_subheader=char_lines[0].spot or "",
+                                supervisor=char_lines[0].supervisor or "",
+                                client=char_lines[0].client or "",
+                                document_header=character_name or "")

         story = []
@@ -58,7 +64,8 @@ def output_report(lines: List[ADRLine], tc_display_format: TimecodeFormat, font_
         start_tc = tc_display_format.seconds_to_smpte(line.start)
         finish_tc = tc_display_format.seconds_to_smpte(line.finish)
         data_block = [[Paragraph(line.cue_number, number_style),
-                       Paragraph(start_tc + " - " + finish_tc, number_style)
+                       Paragraph(start_tc + " - " + finish_tc,
+                                 number_style)
                        ]]

         # RIGHTWARDS ARROW →


@@ -1,3 +1,9 @@
+"""
+Reporting logic. These functions provide reporting to the package and
+take some pains to emit nice-looking escape codes if we're writing to a
+tty.
+"""
+
 import sys
@@ -29,13 +35,15 @@ def print_warning(warning_string):
     sys.stderr.write(" - %s\n" % warning_string)

-def print_advisory_tagging_error(failed_string, position, parent_track_name=None, clip_time=None):
+def print_advisory_tagging_error(failed_string, position,
+                                 parent_track_name=None, clip_time=None):
     if sys.stderr.isatty():
         sys.stderr.write("\n")
     sys.stderr.write(" ! \033[33;1mTagging error: \033[0m")
     ok_string = failed_string[:position]
     not_ok_string = failed_string[position:]
-    sys.stderr.write("\033[32m\"%s\033[31;1m%s\"\033[0m\n" % (ok_string, not_ok_string))
+    sys.stderr.write("\033[32m\"%s\033[31;1m%s\"\033[0m\n" %
+                     (ok_string, not_ok_string))
     if parent_track_name is not None:
         sys.stderr.write(" ! > On track \"%s\"\n" % parent_track_name)
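The writer above colors its output with raw ANSI SGR sequences: green (`32`) for the prefix that tagged cleanly, bold red (`31;1`) from the failure position on, and a reset at the end. A standalone sketch of that formatting, with `format_tagging_error` as a hypothetical helper rather than ptulsconv API:

```python
# Sketch of the ANSI coloring in print_advisory_tagging_error above:
# the cleanly-tagged prefix goes green, the unparseable suffix goes
# bold red, and the reset code restores the terminal's default style.
# format_tagging_error is a hypothetical helper, not ptulsconv API.

GREEN = "\033[32m"
RED_BOLD = "\033[31;1m"
RESET = "\033[0m"

def format_tagging_error(failed_string, position, color=True):
    ok = failed_string[:position]        # the part that parsed
    bad = failed_string[position:]       # the part that failed
    if color:
        return '%s"%s%s%s"%s' % (GREEN, ok, RED_BOLD, bad, RESET)
    return '"%s"' % failed_string
```

As in the original, a caller would gate `color` on `sys.stderr.isatty()` so that output redirected to a file or pipe stays free of escape codes.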


@@ -1,3 +1,7 @@
+"""
+Validation logic for enforcing various consistency rules.
+"""
+
 from dataclasses import dataclass
 from ptulsconv.docparser.adr_entity import ADRLine
 from typing import Iterator, Optional
@@ -10,15 +14,20 @@ class ValidationError:

     def report_message(self):
         if self.event is not None:
-            return f"{self.message}: event at {self.event.start} with number {self.event.cue_number}"
+            return (f"{self.message}: event at {self.event.start} with number "
+                    f"{self.event.cue_number}")
         else:
             return self.message

-def validate_unique_count(input_lines: Iterator[ADRLine], field='title', count=1):
+def validate_unique_count(input_lines: Iterator[ADRLine], field='title',
+                          count=1):
     values = set(list(map(lambda e: getattr(e, field), input_lines)))
     if len(values) > count:
-        yield ValidationError(message="Field {} has too many values (max={}): {}".format(field, count, values))
+        yield ValidationError(
+            message="Field {} has too many values (max={}): {}"
+            .format(field, count, values)
+        )
def validate_value(input_lines: Iterator[ADRLine], key_field, predicate): def validate_value(input_lines: Iterator[ADRLine], key_field, predicate):
@@ -29,7 +38,8 @@ def validate_value(input_lines: Iterator[ADRLine], key_field, predicate):
event=event) event=event)
def validate_unique_field(input_lines: Iterator[ADRLine], field='cue_number', scope=None): def validate_unique_field(input_lines: Iterator[ADRLine], field='cue_number',
scope=None):
values = dict() values = dict()
for event in input_lines: for event in input_lines:
this = getattr(event, field) this = getattr(event, field)
@@ -40,26 +50,31 @@ def validate_unique_field(input_lines: Iterator[ADRLine], field='cue_number', sc
         values.setdefault(key, set())

         if this in values[key]:
-            yield ValidationError(message='Re-used {}'.format(field), event=event)
+            yield ValidationError(message='Re-used {}'.format(field),
+                                  event=event)
         else:
             values[key].update(this)

-def validate_non_empty_field(input_lines: Iterator[ADRLine], field='cue_number'):
+def validate_non_empty_field(input_lines: Iterator[ADRLine],
+                             field='cue_number'):
     for event in input_lines:
         if getattr(event, field, None) is None:
-            yield ValidationError(message='Empty field {}'.format(field), event=event)
+            yield ValidationError(message='Empty field {}'.format(field),
+                                  event=event)

-def validate_dependent_value(input_lines: Iterator[ADRLine], key_field, dependent_field):
+def validate_dependent_value(input_lines: Iterator[ADRLine], key_field,
+                             dependent_field):
     """
-    Validates that two events with the same value in `key_field` always have the
-    same value in `dependent_field`
+    Validates that two events with the same value in `key_field` always have
+    the same value in `dependent_field`
     """
     key_values = set((getattr(x, key_field) for x in input_lines))
     for key_value in key_values:
-        rows = [(getattr(x, key_field), getattr(x, dependent_field)) for x in input_lines
+        rows = [(getattr(x, key_field), getattr(x, dependent_field))
+                for x in input_lines
                 if getattr(x, key_field) == key_value]
         unique_rows = set(rows)
         if len(unique_rows) > 1:
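The validators above are generators that yield `ValidationError` objects, so a caller can lazily collect or report failures. A self-contained sketch of the `validate_dependent_value` idea over plain dicts (field names are borrowed from `ADRLine` purely for illustration, and plain strings stand in for `ValidationError`):

```python
# Sketch of the generator-based validation pattern above: a validator
# yields one message per inconsistency, and callers simply iterate.
# check_dependent_value is a dict-based stand-in for the real
# validate_dependent_value, which operates on ADRLine objects.

def check_dependent_value(rows, key_field, dependent_field):
    """Yield a message whenever rows sharing key_field disagree on
    dependent_field."""
    seen = {}
    for row in rows:
        seen.setdefault(row[key_field], set()).add(row[dependent_field])
    for key, deps in sorted(seen.items()):
        if len(deps) > 1:
            yield "%s=%r has conflicting %s values: %s" % (
                key_field, key, dependent_field, sorted(deps))

rows = [
    {'character_id': '1', 'character_name': 'DORA'},
    {'character_id': '1', 'character_name': 'DORA'},
    {'character_id': '2', 'character_name': 'BOOTS'},
]
errors = list(check_dependent_value(rows, 'character_id', 'character_name'))
# every character_id maps to a single name here, so errors is empty
```

Grouping into a dict of sets makes each rule a single pass, whereas the original re-scans `input_lines` per key; either approach yields the same errors.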


@@ -12,7 +12,10 @@ import ptulsconv
 from ptulsconv.docparser.adr_entity import ADRLine

 # TODO Get a third-party test for Avid Marker lists

-def avid_marker_list(lines: List[ADRLine], report_date=datetime.datetime.now(), reel_start_frame=0, fps=24):
+def avid_marker_list(lines: List[ADRLine], report_date=datetime.datetime.now(),
+                     reel_start_frame=0, fps=24):
     doc = TreeBuilder(element_factory=None)
     doc.start('Avid:StreamItems', {'xmlns:Avid': 'http://www.avid.com'})
@@ -48,26 +51,35 @@ def avid_marker_list(lines: List[ADRLine], report_date=datetime.datetime.now(),
     for line in lines:
         doc.start('AvClass', {'id': 'ATTR'})
-        doc.start('AvProp', {'id': 'ATTR', 'name': '__OMFI:ATTR:NumItems', 'type': 'int32'})
+        doc.start('AvProp', {'id': 'ATTR',
+                             'name': '__OMFI:ATTR:NumItems',
+                             'type': 'int32'})
         doc.data('7')
         doc.end('AvProp')
         doc.start('List', {'id': 'OMFI:ATTR:AttrRefs'})

-        insert_elem('1', 'OMFI:ATTB:IntAttribute', 'int32', '_ATN_CRM_LONG_CREATE_DATE', report_date.strftime("%s"))
-        insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string', '_ATN_CRM_COLOR', 'yellow')
-        insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string', '_ATN_CRM_USER', line.supervisor or "")
+        insert_elem('1', 'OMFI:ATTB:IntAttribute', 'int32',
+                    '_ATN_CRM_LONG_CREATE_DATE', report_date.strftime("%s"))
+        insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string',
+                    '_ATN_CRM_COLOR', 'yellow')
+        insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string',
+                    '_ATN_CRM_USER', line.supervisor or "")

         marker_name = "%s: %s" % (line.cue_number, line.prompt)
-        insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string', '_ATN_CRM_COM', marker_name)
+        insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string',
+                    '_ATN_CRM_COM', marker_name)

         start_frame = int(line.start * fps)
-        insert_elem('2', "OMFI:ATTB:StringAttribute", 'string', '_ATN_CRM_TC',
+        insert_elem('2', "OMFI:ATTB:StringAttribute", 'string',
+                    '_ATN_CRM_TC',
                     str(start_frame - reel_start_frame))
-        insert_elem('2', "OMFI:ATTB:StringAttribute", 'string', '_ATN_CRM_TRK', 'V1')
-        insert_elem('1', "OMFI:ATTB:IntAttribute", 'int32', '_ATN_CRM_LENGTH', '1')
+        insert_elem('2', "OMFI:ATTB:StringAttribute", 'string',
+                    '_ATN_CRM_TRK', 'V1')
+        insert_elem('1', "OMFI:ATTB:IntAttribute", 'int32',
+                    '_ATN_CRM_LENGTH', '1')

         doc.start('ListElem', {})
         doc.end('ListElem')
@@ -82,17 +94,22 @@ def avid_marker_list(lines: List[ADRLine], report_date=datetime.datetime.now(),

 def dump_fmpxml(data, input_file_name, output, adr_field_map):
     doc = TreeBuilder(element_factory=None)
-    doc.start('FMPXMLRESULT', {'xmlns': 'http://www.filemaker.com/fmpxmlresult'})
+    doc.start('FMPXMLRESULT', {'xmlns':
+                               'http://www.filemaker.com/fmpxmlresult'})
     doc.start('ERRORCODE', {})
     doc.data('0')
     doc.end('ERRORCODE')
-    doc.start('PRODUCT', {'NAME': ptulsconv.__name__, 'VERSION': ptulsconv.__version__})
+    doc.start('PRODUCT', {'NAME': ptulsconv.__name__,
+                          'VERSION': ptulsconv.__version__})
     doc.end('PRODUCT')
-    doc.start('DATABASE', {'DATEFORMAT': 'MM/dd/yy', 'LAYOUT': 'summary', 'TIMEFORMAT': 'hh:mm:ss',
-                           'RECORDS': str(len(data['events'])), 'NAME': os.path.basename(input_file_name)})
+    doc.start('DATABASE', {'DATEFORMAT': 'MM/dd/yy',
+                           'LAYOUT': 'summary',
+                           'TIMEFORMAT': 'hh:mm:ss',
+                           'RECORDS': str(len(data['events'])),
+                           'NAME': os.path.basename(input_file_name)})
     doc.end('DATABASE')
     doc.start('METADATA', {})
@@ -102,7 +119,8 @@ def dump_fmpxml(data, input_file_name, output, adr_field_map):
         if tp is int or tp is float:
             ft = 'NUMBER'
-        doc.start('FIELD', {'EMPTYOK': 'YES', 'MAXREPEAT': '1', 'NAME': field[1], 'TYPE': ft})
+        doc.start('FIELD', {'EMPTYOK': 'YES', 'MAXREPEAT': '1',
+                            'NAME': field[1], 'TYPE': ft})
         doc.end('FIELD')
     doc.end('METADATA')
@@ -157,7 +175,8 @@ def fmp_transformed_dump(data, input_file, xsl_name, output, adr_field_map):
     print_status_style("Running xsltproc")
-    xsl_path = os.path.join(pathlib.Path(__file__).parent.absolute(), 'xslt', xsl_name + ".xsl")
+    xsl_path = os.path.join(pathlib.Path(__file__).parent.absolute(), 'xslt',
+                            xsl_name + ".xsl")
     print_status_style("Using xsl: %s" % xsl_path)
     subprocess.run(['xsltproc', xsl_path, '-'],
                    input=str_data, text=True,

pyproject.toml Normal file

@@ -0,0 +1,52 @@
[project]
name = "ptulsconv"
license = { file = "LICENSE" }
classifiers = [
    'License :: OSI Approved :: MIT License',
    'Topic :: Multimedia',
    'Topic :: Multimedia :: Sound/Audio',
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Development Status :: 5 - Production/Stable",
    "Topic :: Text Processing :: Filters"
]
requires-python = ">=3.8"
keywords = ["text-processing", "parsers", "film",
            "broadcast", "editing", "editorial"]

[tool.poetry]
name = "ptulsconv"
version = "2.2.4"
description = "Read Pro Tools Text exports and generate PDF ADR Reports, JSON"
authors = ["Jamie Hardt <jamiehardt@me.com>"]
license = "MIT"
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.8"
parsimonious = "^0.10.0"
tqdm = "^4.67.1"
reportlab = "^4.4.1"
py-ptsl = "^101.1.0"
sphinx_rtd_theme = {version= '>= 1.1.1', optional=true}
sphinx = {version= '>= 5.3.0', optional=true}

[tool.poetry.extras]
doc = ['sphinx', 'sphinx_rtd_theme']

[tool.poetry.scripts]
ptulsconv = 'ptulsconv.__main__:main'

[project.urls]
Source = 'https://github.com/iluvcapra/ptulsconv'
Issues = 'https://github.com/iluvcapra/ptulsconv/issues'
Documentation = 'https://ptulsconv.readthedocs.io/'

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


@@ -1,15 +0,0 @@
astroid==2.9.3
isort==5.10.1
lazy-object-proxy==1.7.1
mccabe==0.6.1
parsimonious==0.9.0
Pillow==9.1.1
platformdirs==2.4.1
pylint==2.12.2
regex==2022.6.2
reportlab==3.6.10
six==1.16.0
toml==0.10.2
tqdm==4.64.0
typing_extensions==4.0.1
wrapt==1.13.3


@@ -1,43 +0,0 @@
from setuptools import setup
from ptulsconv import __author__, __license__, __version__

with open("README.md", "r") as fh:
    long_description = fh.read()

setup(name='ptulsconv',
      version=__version__,
      author=__author__,
      description='Parse and convert Pro Tools text exports',
      long_description_content_type="text/markdown",
      long_description=long_description,
      license=__license__,
      url='https://github.com/iluvcapra/ptulsconv',
      project_urls={
          'Source':
          'https://github.com/iluvcapra/ptulsconv',
          'Issues':
          'https://github.com/iluvcapra/ptulsconv/issues',
      },
      classifiers=[
          'License :: OSI Approved :: MIT License',
          'Topic :: Multimedia',
          'Topic :: Multimedia :: Sound/Audio',
          "Programming Language :: Python :: 3.7",
          "Programming Language :: Python :: 3.8",
          "Programming Language :: Python :: 3.9",
          "Programming Language :: Python :: 3.10",
          "Development Status :: 4 - Beta",
          "Topic :: Text Processing :: Filters"],
      packages=['ptulsconv'],
      keywords='text-processing parsers film tv editing editorial',
      install_requires=['parsimonious', 'tqdm', 'reportlab'],
      package_data={
          "ptulsconv": ["xslt/*.xsl"]
      },
      entry_points={
          'console_scripts': [
              'ptulsconv = ptulsconv.__main__:main'
          ]
      }
      )


@@ -1,4 +0,0 @@
#!/bin/bash
coverage run -m pytest . ; coverage-lcov


@@ -0,0 +1,24 @@
SESSION NAME: Test for ptulsconv
SAMPLE RATE: 48000.000000
BIT DEPTH: 24-bit
SESSION START TIMECODE: 00:00:00:00
TIMECODE FORMAT: 23.976 Frame
# OF AUDIO TRACKS: 1
# OF AUDIO CLIPS: 0
# OF AUDIO FILES: 0
T R A C K L I S T I N G
TRACK NAME: Hamlet
COMMENTS: {Actor=Laurence Olivier}
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 Test Line 1 $QN=T1001 00:00:00:00 00:00:02:00 00:00:02:00 Unmuted
1 2 Test Line 2 $QN=T1002 00:00:04:00 00:00:06:00 00:00:02:00 Unmuted
M A R K E R S L I S T I N G
# LOCATION TIME REFERENCE UNITS NAME TRACK NAME TRACK TYPE COMMENTS
1 00:00:00:00 0 Samples {Title=Multiple Marker Rulers Project} Markers Ruler
2 00:00:04:00 192192 Samples Track Marker Hamlet Track
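The export above is a plain tab-delimited text file whose header block is simple `KEY: value` lines. As a rough illustration of that shape only (ptulsconv's real parser is a PEG grammar, not this), a minimal stdlib sketch with a hypothetical `parse_export_header` helper:

```python
def parse_export_header(text: str) -> dict:
    """Hypothetical sketch: collect the KEY: value header lines of a
    Pro Tools text export into a dict. Stops at the first blank line,
    where the header block ends."""
    header = {}
    for line in text.splitlines():
        if not line.strip():
            break
        # Split on the FIRST colon only, so timecode values like
        # 00:00:00:00 survive intact on the right-hand side.
        key, sep, value = line.partition(":")
        if sep:
            header[key.strip()] = value.strip()
    return header

sample = """SESSION NAME:\tTest for ptulsconv
SAMPLE RATE:\t48000.000000
BIT DEPTH:\t24-bit
SESSION START TIMECODE:\t00:00:00:00
TIMECODE FORMAT:\t23.976 Frame"""

hdr = parse_export_header(sample)
```

Splitting on the first colon is the key detail: naive `line.split(":")` would shred the timecode fields.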


@@ -7,8 +7,8 @@ class TestRobinHood1(unittest.TestCase):
     path = os.path.dirname(__file__) + '/../export_cases/Robin Hood Spotting.txt'

     def test_header_export(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         self.assertIsNotNone(session.header)
         self.assertEqual(session.header.session_name, 'Robin Hood Spotting')
@@ -19,7 +19,8 @@ class TestRobinHood1(unittest.TestCase):
     def test_all_sections(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         self.assertIsNotNone(session.header)
         self.assertIsNotNone(session.files)
@@ -30,7 +31,8 @@ class TestRobinHood1(unittest.TestCase):
     def test_tracks(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         self.assertEqual(len(session.tracks), 14)
         self.assertListEqual(["Scenes", "Robin", "Will", "Marian", "John",
@@ -54,7 +56,10 @@ class TestRobinHood1(unittest.TestCase):
                              list(map(lambda t: t.comments, session.tracks)))

     def test_a_track(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         guy_track = session.tracks[5]
         self.assertEqual(guy_track.name, 'Guy')
         self.assertEqual(guy_track.comments, '[ADR] {Actor=Basil Rathbone} $CN=5')
@@ -71,7 +76,8 @@ class TestRobinHood1(unittest.TestCase):
         self.assertEqual(guy_track.clips[5].state, 'Unmuted')

     def test_memory_locations(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         self.assertEqual(len(session.markers), 1)
         self.assertEqual(session.markers[0].number, 1)


@@ -7,23 +7,30 @@ class TestRobinHood5(unittest.TestCase):
     path = os.path.dirname(__file__) + '/../export_cases/Robin Hood Spotting5.txt'

     def test_skipped_segments(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         self.assertIsNone(session.files)
         self.assertIsNone(session.clips)

     def test_plugins(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         self.assertEqual(len(session.plugins), 2)

     def test_stereo_track(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         self.assertEqual(session.tracks[1].name, 'MX WT (Stereo)')
         self.assertEqual(len(session.tracks[1].clips), 2)
         self.assertEqual(session.tracks[1].clips[0].clip_name, 'RobinHood.1-01.L')
         self.assertEqual(session.tracks[1].clips[1].clip_name, 'RobinHood.1-01.R')

     def test_a_track(self):
-        session = parse_document(self.path)
+        with open(self.path,"r") as file:
+            session = parse_document(file.read())
         guy_track = session.tracks[8]
         self.assertEqual(guy_track.name, 'Guy')


@@ -7,7 +7,9 @@ class TestRobinHood6(unittest.TestCase):
     path = os.path.dirname(__file__) + '/../export_cases/Robin Hood Spotting6.txt'

     def test_a_track(self):
-        session = parse_document(self.path)
+        with open(self.path, "r") as file:
+            session = parse_document(file.read())
         marian_track = session.tracks[6]
         self.assertEqual(marian_track.name, 'Marian')


@@ -7,11 +7,16 @@ class TestRobinHoodDF(unittest.TestCase):
     path = os.path.dirname(__file__) + '/../export_cases/Robin Hood SpottingDF.txt'

     def test_header_export_df(self):
-        session = parse_document(self.path)
+        with open(self.path, "r") as file:
+            session = parse_document(file.read())
         self.assertEqual(session.header.timecode_drop_frame, True)

     def test_a_track(self):
-        session = parse_document(self.path)
+        with open(self.path, "r") as file:
+            session = parse_document(file.read())
         guy_track = session.tracks[4]
         self.assertEqual(guy_track.name, 'Robin')


@@ -2,33 +2,52 @@ import unittest
 import tempfile
-import sys
 import os.path
 import os
 import glob

 from ptulsconv import commands


-class TestBroadcastTimecode(unittest.TestCase):
+class TestPDFExport(unittest.TestCase):

     def test_report_generation(self):
         """
         Setp through every text file in export_cases and make sure it can
         be converted into PDF docs without throwing an error
         """
-        files = [os.path.dirname(__file__) + "/../export_cases/Robin Hood Spotting.txt"]
-        #files.append(os.path.dirname(__file__) + "/../export_cases/Robin Hood Spotting2.txt")
+        files = []
+        files = [os.path.dirname(__file__) +
+                 "/../export_cases/Robin Hood Spotting.txt"]

         for path in files:
             tempdir = tempfile.TemporaryDirectory()
             os.chdir(tempdir.name)
             try:
-                commands.convert(path, major_mode='doc')
-            except:
-                assert False, "Error processing file %s" % path
+                commands.convert(input_file=path, major_mode='doc')
+            except Exception as e:
+                print("Error in test_report_generation")
+                print(f"File: {path}")
+                print(repr(e))
+                raise e
+            finally:
+                tempdir.cleanup()
+
+    def test_report_generation_track_markers(self):
+        files = []
+        files.append(os.path.dirname(__file__) +
+                     "/../export_cases/Test for ptulsconv.txt")
+
+        for path in files:
+            tempdir = tempfile.TemporaryDirectory()
+            os.chdir(tempdir.name)
+            try:
+                commands.convert(input_file=path, major_mode='doc')
+            except Exception as e:
+                print("Error in test_report_generation_track_markers")
+                print(f"File: {path}")
+                print(repr(e))
+                raise e
             finally:
                 tempdir.cleanup()


 if __name__ == '__main__':
     unittest.main()
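These PDF-export tests run each conversion with the working directory pointed at a `tempfile.TemporaryDirectory` and clean it up in a `finally` block. That pattern can be sketched standalone; `do_work` here is a stand-in for `commands.convert`, and restoring the previous CWD before cleanup is an extra step this sketch adds so the process never sits in a deleted directory:

```python
import os
import tempfile


def run_in_tempdir(do_work):
    """Run do_work() with the CWD set to a fresh temporary directory,
    removing the directory even if do_work raises."""
    tempdir = tempfile.TemporaryDirectory()
    old_cwd = os.getcwd()
    os.chdir(tempdir.name)
    try:
        do_work()
    finally:
        os.chdir(old_cwd)  # leave the doomed directory before deleting it
        tempdir.cleanup()


# Usage: any files the callback writes land in the throwaway directory.
run_in_tempdir(lambda: open("report.pdf", "w").close())
```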


@@ -70,6 +70,16 @@ class TestBroadcastTimecode(unittest.TestCase):
         s1 = tc_format.seconds_to_smpte(secs)
         self.assertEqual(s1, "00:00:01:01")

+    def test_unparseable_footage(self):
+        time_str = "10.1"
+        s1 = broadcast_timecode.footage_to_frame_count(time_str)
+        self.assertIsNone(s1)
+
+    def test_unparseable_timecode(self):
+        time_str = "11.32-19"
+        s1 = broadcast_timecode.smpte_to_frame_count(time_str, frames_per_logical_second=24)
+        self.assertIsNone(s1)
+

 if __name__ == '__main__':
     unittest.main()


@@ -88,7 +88,9 @@ class TestTagCompiler(unittest.TestCase):
                 state='Unmuted',
                 timestamp=None),
         ]
-        test_track = doc_entity.TrackDescriptor(name="Track 1 [A] {Color=Red} $Mode=1",
+        test_track = doc_entity.TrackDescriptor(
+            index=0,
+            name="Track 1 [A] {Color=Red} $Mode=1",
             comments="{Comment=This is some text in the comments}",
             user_delay_samples=0,
             plugins=[],
@@ -100,14 +102,14 @@ class TestTagCompiler(unittest.TestCase):
                 time_reference=48000 * 3600,
                 units="Samples",
                 name="Marker 1 {Part=1}",
-                comments=""
+                comments="", track_marker=False,
             ),
             doc_entity.MarkerDescriptor(number=2,
                 location="01:00:01:00",
                 time_reference=48000 * 3601,
                 units="Samples",
                 name="Marker 2 {Part=2}",
-                comments="[M1]"
+                comments="[M1]", track_marker=False,
             ),
         ]


@@ -1,5 +1,5 @@
 import unittest
-from ptulsconv.docparser import doc_entity, doc_parser_visitor, ptuls_grammar, tag_compiler
+from ptulsconv.docparser import doc_entity, pt_doc_parser, tag_compiler
 import os.path
@@ -8,8 +8,8 @@ class TaggingIntegratedTests(unittest.TestCase):
     def test_event_list(self):
         with open(self.path, 'r') as f:
-            document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())
-            document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
+            document_ast = pt_doc_parser.protools_text_export_grammar.parse(f.read())
+            document: doc_entity.SessionDescriptor = pt_doc_parser.DocParserVisitor().visit(document_ast)

         compiler = tag_compiler.TagCompiler()
         compiler.session = document
@@ -28,8 +28,8 @@ class TaggingIntegratedTests(unittest.TestCase):
     def test_append(self):
         with open(self.path, 'r') as f:
-            document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())
-            document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
+            document_ast = pt_doc_parser.protools_text_export_grammar.parse(f.read())
+            document: doc_entity.SessionDescriptor = pt_doc_parser.DocParserVisitor().visit(document_ast)

         compiler = tag_compiler.TagCompiler()
         compiler.session = document
@@ -51,8 +51,8 @@ class TaggingIntegratedTests(unittest.TestCase):
     def test_successive_appends(self):
         with open(self.path, 'r') as f:
-            document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())
-            document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
+            document_ast = pt_doc_parser.protools_text_export_grammar.parse(f.read())
+            document: doc_entity.SessionDescriptor = pt_doc_parser.DocParserVisitor().visit(document_ast)

         compiler = tag_compiler.TagCompiler()
         compiler.session = document