Compare commits: v0.1...08ad33e27d

14 commits, newest first:

- 08ad33e27d
- e3f4505d12
- 8964bb030b
- f7c9def9bf
- 381ec6f820
- 79fa79e706
- b3b960c1da
- 85470ac367
- 37f1c70e57
- 526b798e02
- 877c0aeaf0
- 4e0b34edfe
- 2169fbb994
- 966eaecbbd
.python-version (new file, 1 line)

@@ -0,0 +1 @@
+3.11
LICENSE (new file, 29 lines)

@@ -0,0 +1,29 @@
+BSD 3-Clause License
+
+Copyright (c) 2020, Jamie Hardt
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
README.md (42 changed lines)

@@ -1,7 +1,16 @@
 # soundobjects Blender Add-On
 
-This add-on adds three operators for working with immersive 3D audio in [Blender][blender], specifically it allows you to create ADM Broadcast
-WAVE files for use with [Dolby Atmos][atmos] or other object-based sound mixing workflows.
+**NOTE**: _Avid made some changes to ADM file import in Pro Tools and it no
+longer accepts ADMs made by this plugin. It may still work with other DAWs._
+
+This add-on adds three operators for working with immersive 3D audio in
+[Blender][blender], specifically it allows you to create ADM Broadcast WAVE
+files for use with [Dolby Atmos][atmos] or other object-based sound mixing
+workflows.
+
+[Here](https://vimeo.com/464569386) you can see a short demo of how to add
+sounds to an animated Blender scene and import the resulting file into Pro
+Tools and then play them into a Dolby DAPS Renderer.
 
 [blender]: https://www.blender.org
 [atmos]: https://www.dolby.com/technologies/dolby-atmos/

@@ -10,25 +19,32 @@ WAVE files for use with [Dolby Atmos][atmos] or other object-based sound mixing
 
 ### `import_test.wav_file_batch`
 
-**Import WAV Files:** This operator allows you to add multiple audio files to a .blend file so they'll be available to
-the *Add Sounds to Meshes* operator.
+**Import WAV Files:** This operator allows you to add multiple audio files to a
+.blend file so they'll be available to the *Add Sounds to Meshes* operator.
 
 ### `object.add_speakers_to_obj`
 
-**Add Sounds to Meshes:** This operator takes all the selected objects in the current scene and attaches a new speaker
-locked to that object's location throughout the animation. You provide the prefix for the name of a set of sound files
-added with the _Import WAV Files_ operator, and these are added to each selected object randomly. The sounds can be
-timed to either begin playing at the beginning of the sequence, at a random time, or when the respective object is
+**Add Sounds to Meshes:** This operator takes all the selected objects in the
+current scene and attaches a new speaker locked to that object's location
+throughout the animation. You provide the prefix for the name of a set of sound
+files added with the _Import WAV Files_ operator, and these are added to each
+selected object randomly. The sounds can be timed to either begin playing at
+the beginning of the sequence, at a random time, or when the respective object
+is
 closest to the scene's camera.
 
 ### `export.adm_wave_file`
 
-**Export ADM Wave File:** This operator exports all of the speakers in a scene as an ADM Broadcast-WAV file compartible
-with a Dolby Atmos rendering workflow. This produces a multichannel WAV file with embedded ADM metadata the passes
-panning information to the client. (Has been tested and works with Avid Pro Tools 2020).
+**Export ADM Wave File:** This operator exports all of the speakers in a scene
+as an ADM Broadcast-WAV file compatible with a Dolby Atmos rendering workflow.
+This produces a multichannel WAV file with embedded ADM metadata that passes
+panning information to the client. (Has been tested and works with Avid Pro
+Tools 2020).
 
 
-## Important Note
+## Requirements
 
-This add-on requires that the [EBU Audio Renderer](https://github.com/ebu/ebu_adm_renderer) (`ear` v2.0) Python package
+This add-on requires that the [EBU Audio
+Renderer](https://github.com/ebu/ebu_adm_renderer) (`ear` v2.0) Python package
 be installed to Blender's Python.
 
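The three timing behaviors the README describes for *Add Sounds to Meshes* (start of sequence, random frame, closest approach to the camera) can be modeled without Blender. This is a minimal sketch for illustration only; the function name and this reduced `TriggerMode` enum are invented here and are not the add-on's actual API (the add-on's own enum also includes a Gaussian-random mode):

```python
import random
from enum import Enum


class TriggerMode(Enum):
    SEQUENCE_START = 1
    RANDOM = 2
    CLOSEST_APPROACH = 3


def start_frame_for_sound(mode, frame_start, frame_end, closest_frame):
    # Choose the frame where a sound begins, per the README's three options.
    if mode is TriggerMode.SEQUENCE_START:
        return frame_start
    if mode is TriggerMode.RANDOM:
        return random.randint(frame_start, frame_end)
    return closest_frame  # TriggerMode.CLOSEST_APPROACH


print(start_frame_for_sound(TriggerMode.SEQUENCE_START, 1, 250, 120))  # → 1
```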
__init__.py (18 changed lines)

@@ -10,13 +10,25 @@ bl_info = {
     "author": "Jamie Hardt",
     "version": (0, 1),
     "warning": "Requires `ear` EBU ADM Renderer package to be installed",
-    "blender": (2, 90, 0),
+    "blender": (2, 93, 1),
     "category": "Import-Export",
+    "support": "TESTING",
+    "tracker_url": "https://github.com/iluvcapra/soundobjects_blender_addon/issues",
+    "wiki_url": ""
 }
 
+# class SoundObjectAttachmentPanel(bpy.types.Panel):
+#     bl_idname = "OBJECT_PT_sound_object_attachment_panel"
+#     bl_space_type = "VIEW_3D"
+#     bl_label = "Attach Sounds"
+#     bl_region_type = "UI"
+#     bl_category = "Tools"
+#     bl_context = "object"
+#     bl_options = {"DEFAULT_CLOSED"}
+
+#     def draw(self, context):
+#         self.layout.label(text="Attach Sounds")
 
 
 def import_wav_menu_callback(self, context):
     self.layout.operator(ImportWav.bl_idname, text="WAV Audio Files (.wav)")

@@ -39,7 +51,7 @@ def register():
     bpy.types.TOPBAR_MT_file_export.append(export_adm_menu_callback)
     bpy.types.VIEW3D_MT_object.append(add_sound_to_mesh_menu_callback)
 
-    bpy.utils.register_class(SoundObjectAttachmentPanel)
+    # bpy.utils.register_class(SoundObjectAttachmentPanel)
 
 
 def unregister():

@@ -51,4 +63,4 @@ def unregister():
     bpy.types.TOPBAR_MT_file_export.remove(export_adm_menu_callback)
     bpy.types.VIEW3D_MT_object.remove(add_sound_to_mesh_menu_callback)
 
-    bpy.utils.unregister_class(SoundObjectAttachmentPanel)
+    # bpy.utils.unregister_class(SoundObjectAttachmentPanel)
@@ -1,8 +1,10 @@
 import bpy
 from numpy.linalg import norm
+from numpy.typing import ArrayLike
 from random import uniform, gauss
 from math import floor
 from enum import Enum
+from typing import cast
 
 from dataclasses import dataclass
 

@@ -27,7 +29,8 @@ class SpatialEnvelope:
     exits_range: int
 
 
-def sound_camera_spatial_envelope(scene: bpy.types.Scene, speaker_obj, considered_range: float) -> SpatialEnvelope:
+def sound_camera_spatial_envelope(scene: bpy.types.Scene, speaker_obj,
+                                  considered_range: float) -> SpatialEnvelope:
     min_dist = sys.float_info.max
     min_dist_frame = scene.frame_start
     enters_range_frame = None

@@ -35,8 +38,10 @@ def sound_camera_spatial_envelope(scene: bpy.types.Scene, speaker_obj, considere
 
     in_range = False
     for frame in range(scene.frame_start, scene.frame_end + 1):
+        assert scene.camera
         scene.frame_set(frame)
-        rel = speaker_obj.matrix_world.to_translation() - scene.camera.matrix_world.to_translation()
+        rel = speaker_obj.matrix_world.to_translation() \
+            - scene.camera.matrix_world.to_translation()
         dist = norm(rel)
 

@@ -44,7 +49,7 @@ def sound_camera_spatial_envelope(scene: bpy.types.Scene, speaker_obj, considere
             in_range = True
 
         if dist < min_dist:
-            min_dist = dist
+            min_dist = float(dist)
             min_dist_frame = frame
 
         if dist > considered_range and in_range:

@@ -52,6 +57,9 @@ def sound_camera_spatial_envelope(scene: bpy.types.Scene, speaker_obj, considere
             in_range = False
             break
 
+    assert enters_range_frame
+    assert exits_range_frame
+
     return SpatialEnvelope(considered_range=considered_range,
                            enters_range=enters_range_frame,
                            exits_range=exits_range_frame,

@@ -59,13 +67,20 @@ def sound_camera_spatial_envelope(scene: bpy.types.Scene, speaker_obj, considere
                            min_distance=min_dist)
 
 
-def closest_approach_to_camera(scene, speaker_object):
+def closest_approach_to_camera(scene: bpy.types.Scene,
+                               speaker_object: bpy.types.Object) -> tuple[float, int]:
+    """
+    Steps through the scene frame-by-frame and returns a tuple of
+    (minimum_distance, at_frame_index)
+    """
     max_dist = sys.float_info.max
     at_time = scene.frame_start
     for frame in range(scene.frame_start, scene.frame_end + 1):
+        assert scene.camera
         scene.frame_set(frame)
-        rel = speaker_object.matrix_world.to_translation() - scene.camera.matrix_world.to_translation()
-        dist = norm(rel)
+        rel = speaker_object.matrix_world.to_translation() - \
+            scene.camera.matrix_world.to_translation()
+        dist = float(norm(cast(ArrayLike, rel)))
 
         if dist < max_dist:
             max_dist = dist

@@ -74,7 +89,7 @@ def closest_approach_to_camera(scene, speaker_object):
     return (max_dist, at_time)
 
 
-def track_speaker_to_camera(speaker, camera):
+def track_speaker_to_camera(speaker):
     camera_lock = speaker.constraints.new('TRACK_TO')
     camera_lock.target = bpy.context.scene.camera
     camera_lock.use_target_z = True

@@ -89,7 +104,8 @@ def spot_audio(context, speaker, trigger_mode, sync_peak, sound_peak, sound_leng
         audio_scene_in = envelope.closest_range
 
     elif trigger_mode == TriggerMode.RANDOM:
-        audio_scene_in = floor(uniform(context.scene.frame_start, context.scene.frame_end))
+        audio_scene_in = floor(
+            uniform(context.scene.frame_start, context.scene.frame_end))
     elif trigger_mode == TriggerMode.RANDOM_GAUSSIAN:
         mean = (context.scene.frame_end - context.scene.frame_start) / 2
         audio_scene_in = floor(gauss(mean, gaussian_stddev))

@@ -127,11 +143,6 @@ def constrain_speaker_to_mesh(speaker_obj, mesh):
     location_loc.target = mesh
-    location_loc.target = mesh
 
 
-def apply_gain_envelope(speaker_obj, envelope):
-    pass
-
-
 def add_speakers_to_meshes(meshes, context, sound=None,
                            sound_name_prefix=None,
                            sync_peak=False,

@@ -146,7 +157,8 @@ def add_speakers_to_meshes(meshes, context, sound=None,
             print("object is not mesh")
             continue
 
-        envelope = sound_camera_spatial_envelope(context.scene, mesh, considered_range=5.)
+        envelope = sound_camera_spatial_envelope(
+            context.scene, mesh, considered_range=5.)
 
         speaker_obj = next((spk for spk in context.scene.objects
                             if spk.type == 'SPEAKER' and spk.constraints['Copy Location'].target == mesh), None)

@@ -156,7 +168,7 @@ def add_speakers_to_meshes(meshes, context, sound=None,
             speaker_obj = context.selected_objects[0]
 
         constrain_speaker_to_mesh(speaker_obj, mesh)
-        track_speaker_to_camera(speaker_obj, context.scene.camera)
+        track_speaker_to_camera(speaker_obj)
 
         if sound_name_prefix is not None:
             sound = sound_bank.random_sound()

@@ -170,6 +182,4 @@ def add_speakers_to_meshes(meshes, context, sound=None,
                    gaussian_stddev=gaussian_stddev,
                    sound_bank=sound_bank, envelope=envelope)
 
-        apply_gain_envelope(speaker_obj, envelope)
-
         speaker_obj.data.update_tag()
 
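The envelope computation above steps through the scene frame by frame, recording when a speaker enters and leaves a "considered range" around the camera and its minimum distance. The same enter/exit bookkeeping can be sketched in pure Python over a precomputed per-frame distance series (no `bpy`); the function name and return shape here are illustrative, not the add-on's API:

```python
def spatial_envelope(distances, frame_start, considered_range):
    """Return (enters_frame, exits_frame, min_dist, min_dist_frame) for a
    per-frame distance series, mirroring the enter/exit tracking above."""
    enters = exits = None
    in_range = False
    min_dist = float("inf")
    min_dist_frame = frame_start
    for offset, dist in enumerate(distances):
        frame = frame_start + offset
        if dist < considered_range and not in_range:
            enters, in_range = frame, True
        if dist < min_dist:
            min_dist, min_dist_frame = dist, frame
        if dist > considered_range and in_range:
            exits, in_range = frame, False
            break  # as above, stop after the first exit
    return enters, exits, min_dist, min_dist_frame


# A camera fly-by: far, approaching, inside range, then receding.
print(spatial_envelope([9.0, 6.0, 4.0, 2.0, 4.5, 7.0],
                       frame_start=1, considered_range=5.0))
# → (3, 6, 2.0, 4)
```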
@@ -1,7 +1,5 @@
 import bpy
 
-from contextlib import contextmanager
-
 import lxml
 import uuid
 from fractions import Fraction

@@ -27,12 +25,24 @@ from .geom_utils import (speaker_active_time_range,
                          speakers_by_min_distance,
                          speakers_by_start_time)
 
-from .object_mix import (ObjectMix, ObjectMixPool, object_mixes_from_source_groups)
+from .object_mix import (ObjectMix, ObjectMixPool,
+                         object_mixes_from_source_groups)
 
 from .speaker_utils import (all_speakers)
 
 
 def group_speakers(speakers, scene) -> List[List[bpy.types.Object]]:
+    """
+    Accepts a list of speakers and a scene, and returns a list of lists.
+
+    Each list contains a list of speakers which are guaranteed to not have
+    overlapping sounds. Each of the child lists contains a list of speaker
+    objects in ascending order by start time.
+
+    Speakers are allocated to lists on the basis of their minimum distance to
+    the camera according to `speakers_by_min_distance`. Closer sounds will
+    appear on the earliest list if there is no overlap.
+    """
     def list_can_accept_speaker(speaker_list, speaker_to_test):
         test_range = speaker_active_time_range(speaker_to_test)
 

@@ -62,7 +72,8 @@ def group_speakers(speakers, scene) -> List[List[bpy.types.Object]]:
     return ret_val
 
 
-def adm_for_object(scene, sound_object: ObjectMix, room_size, adm_builder, object_index):
+def adm_for_object(scene: bpy.types.Scene, sound_object: ObjectMix, room_size,
+                   adm_builder, object_index):
     fps = scene.render.fps
     frame_start = scene.frame_start
     frame_end = scene.frame_end

@@ -79,7 +90,8 @@ def adm_for_object(scene, sound_object: ObjectMix, room_size, adm_builder, objec
     created.track_uid.bitDepth = sound_object.bits_per_sample
 
 
-def adm_for_scene(scene, sound_objects: List[ObjectMix], room_size):
+def adm_for_scene(scene: bpy.types.Scene, sound_object_mixes: List[ObjectMix],
+                  room_size):
     adm_builder = ADMBuilder()
 
     frame_start = scene.frame_start

@@ -92,8 +104,9 @@ def adm_for_scene(scene, sound_objects: List[ObjectMix], room_size):
 
     adm_builder.create_content(audioContentName="Objects")
 
-    for object_index, sound_object in enumerate(sound_objects):
-        adm_for_object(scene, sound_object, room_size, adm_builder, object_index)
+    for object_index, sound_object in enumerate(sound_object_mixes):
+        adm_for_object(scene, sound_object, room_size,
+                       adm_builder, object_index)
 
     adm = adm_builder.adm
 
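The contract in the `group_speakers` docstring (greedily place each sound on the first list where it overlaps nothing, priority order decided by closest approach) is a standard interval-partitioning scheme. A standalone sketch with plain `(start, end)` frame tuples instead of speaker objects, under the assumption that the input is already sorted by priority:

```python
def group_intervals(intervals):
    """Greedy allocation: place each (start, end) interval on the first
    existing list where it overlaps no member, creating a new list when
    none can accept it. Input is assumed pre-sorted by priority (e.g.
    minimum camera distance), mirroring the grouping described above."""
    def overlaps(a, b):
        # Closed intervals overlap iff one's start lies within the other.
        return a[0] <= b[0] <= a[1] or b[0] <= a[0] <= b[1]

    groups = []
    for interval in intervals:
        for group in groups:
            if not any(overlaps(interval, member) for member in group):
                group.append(interval)
                break
        else:
            groups.append([interval])
    return groups


print(group_intervals([(0, 10), (5, 15), (12, 20)]))
# → [[(0, 10), (12, 20)], [(5, 15)]]
```

Each resulting group can then be rendered as one non-overlapping ADM "object" track, which is why a `max_objects` cap on the number of groups makes sense downstream.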
@@ -105,23 +118,28 @@ def adm_for_scene(scene, sound_objects: List[ObjectMix], room_size):
 
 
 def bext_data(scene, sample_rate, room_size):
-    description = "SCENE={};ROOM_SIZE={}\n".format(scene.name, room_size).encode("ascii")
-    originator_name = "Blender {}".format(bpy.app.version_string).encode("ascii")
+    description = "SCENE={};ROOM_SIZE={}\n".format(
+        scene.name, room_size).encode("ascii")
+    originator_name = "Blender {}".format(
+        bpy.app.version_string).encode("ascii")
     originator_ref = uuid.uuid1().hex.encode("ascii")
     date10 = strftime("%Y-%m-%d").encode("ascii")
     time8 = strftime("%H:%M:%S").encode("ascii")
-    timeref = int(float(scene.frame_start) * sample_rate / float(scene.render.fps))
+    timeref = int(float(scene.frame_start) *
+                  sample_rate / float(scene.render.fps))
     version = 0
     umid = b"\0" * 64
     pad = b"\0" * 190
 
-    data = struct.pack("<256s32s32s10s8sQH64s190s", description, originator_name,
-                       originator_ref, date10, time8, timeref, version, umid, pad)
+    data = struct.pack("<256s32s32s10s8sQH64s190s", description,
+                       originator_name, originator_ref, date10, time8, timeref,
+                       version, umid, pad)
 
     return data
 
 
-def attach_outfile_metadata(out_format, outfile, room_size, scene, sound_objects):
+def attach_outfile_metadata(out_format, outfile, room_size, scene,
+                            sound_objects):
     adm, chna = adm_for_scene(scene, sound_objects, room_size=room_size)
     outfile.axml = lxml.etree.tostring(adm, pretty_print=True)
     outfile.chna = chna

@@ -150,13 +168,16 @@ def write_outfile_audio_data(outfile, shortest_file, sound_objects):
         cursor = cursor + to_read
 
 
-def write_muxed_wav(mix_pool: ObjectMixPool, scene, out_format, room_size, outfile, shortest_file):
+def write_muxed_wav(mix_pool: ObjectMixPool, scene, out_format, room_size,
+                    outfile, shortest_file):
     sound_objects = mix_pool.object_mixes
-    attach_outfile_metadata(out_format, outfile, room_size, scene, sound_objects)
+    attach_outfile_metadata(out_format, outfile,
+                            room_size, scene, sound_objects)
     write_outfile_audio_data(outfile, shortest_file, sound_objects)
 
 
-def mux_adm_from_object_mix_pool(scene, mix_pool: ObjectMixPool, output_filename, room_size=1.):
+def mux_adm_from_object_mix_pool(scene, mix_pool: ObjectMixPool,
+                                 output_filename, room_size=1.):
     object_count = len(mix_pool.object_mixes)
     assert object_count > 0
 

@@ -178,11 +199,19 @@ def print_partition_results(object_groups, sound_sources, too_far_speakers):
         print(" - %s" % source.name)
 
 
-def partition_sounds_to_objects(scene, max_objects):
+def partition_sounds_to_objects(scene, max_objects) -> \
+        tuple[list[list[bpy.types.Object]], list[list[bpy.types.Object]]]:
+    """
+    Allocates sounds in the scene into non-overlapping lists of sounds. The
+    second return value is the list of sounds that could not be allocated
+    because the max_objects limit was exceeded.
+
+    Sounds are allocated to lists according to `group_speakers`.
+    """
     sound_sources = all_speakers(scene)
 
     if len(sound_sources) == 0:
-        return []
+        return [], []
 
     object_groups = group_speakers(sound_sources, scene)
     too_far_speakers = []

@@ -196,7 +225,8 @@ def partition_sounds_to_objects(scene, max_objects):
     return object_groups, too_far_speakers
 
 
-def generate_adm(context: bpy.types.Context, filepath: str, room_size: float, max_objects: int):
+def generate_adm(context: bpy.types.Context, filepath: str, room_size: float,
+                 max_objects: int) -> set[str]:
     scene = context.scene
 
     object_groups, _ = partition_sounds_to_objects(scene, max_objects)
 
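The `bext_data` function above packs the fixed-layout body of a Broadcast-WAV `bext` chunk with `struct`: fixed-width ASCII fields, a 64-bit sample-accurate time reference, a version word, then UMID and reserved padding. The format string totals 602 bytes. A standalone sketch of the same pack layout, with simplified field values and a hypothetical function name (no Blender required):

```python
import struct


def pack_bext(description: bytes, originator: bytes, originator_ref: bytes,
              date10: bytes, time8: bytes, timeref: int) -> bytes:
    # Same layout as bext_data above: <256s32s32s10s8sQH64s190s.
    umid = b"\0" * 64   # SMPTE UMID left zeroed
    pad = b"\0" * 190   # reserved
    return struct.pack("<256s32s32s10s8sQH64s190s", description, originator,
                       originator_ref, date10, time8, timeref, 0, umid, pad)


chunk = pack_bext(b"SCENE=Scene;ROOM_SIZE=1.0\n", b"Blender 2.93.1",
                  b"abc123", b"2021-01-01", b"12:00:00", timeref=48000)
print(len(chunk))  # → 602
```

The `s` codes zero-pad short byte strings to their fixed width, so callers only need to keep each field within its limit.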
@@ -8,12 +8,13 @@ from numpy.linalg import norm
 
 from mathutils import Vector, Quaternion
 
 
 class FrameInterval:
     def __init__(self, start_frame, end_frame):
         self.start_frame = int(start_frame)
         self.end_frame = int(end_frame)
 
-    def overlaps(self, other : 'FrameInterval') -> bool:
+    def overlaps(self, other: 'FrameInterval') -> bool:
         return self.start_frame <= other.start_frame <= self.end_frame or \
             other.start_frame <= self.start_frame <= other.end_frame
 

@@ -33,7 +34,7 @@ def compute_relative_vector(camera: bpy.types.Camera, target: bpy.types.Object):
 
     # The camera's worldvector is norm to the horizon, we want a vector
     # down the barrel.
-    camera_correction = Quaternion( ( sqrt(2.) / 2. , sqrt(2.) / 2. , 0. , 0.) )
+    camera_correction = Quaternion((sqrt(2.) / 2., sqrt(2.) / 2., 0., 0.))
     relative_vector.rotate(camera_correction)
 
     return relative_vector
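The `FrameInterval.overlaps` test above is the usual closed-interval check: two intervals overlap exactly when one's start falls inside the other. A self-contained copy of the class (duplicated here for illustration, not imported from the add-on):

```python
class FrameInterval:
    def __init__(self, start_frame, end_frame):
        self.start_frame = int(start_frame)
        self.end_frame = int(end_frame)

    def overlaps(self, other: 'FrameInterval') -> bool:
        # Two closed intervals overlap iff one's start lies within the other.
        return self.start_frame <= other.start_frame <= self.end_frame or \
            other.start_frame <= self.start_frame <= other.end_frame


print(FrameInterval(1, 50).overlaps(FrameInterval(40, 90)))  # → True
print(FrameInterval(1, 50).overlaps(FrameInterval(60, 90)))  # → False
```

This check is what `group_speakers` relies on (via `speaker_active_time_range`) to keep each object track free of simultaneous sounds.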
@@ -51,11 +52,11 @@ def room_norm_vector(vec, room_size=1.) -> Vector:
     The Pro Tools/Dolby Atmos workflow I am targeting uses "Room Centric" panner coordinates
     ("cartesian allocentric coordinates" in ADM speak) and this process seems to yield good
     results.
 
     I also experimented with using normalized camera frame coordinates from the
     bpy_extras.object_utils.world_to_camera_view method and this gives very good results as
     long as the object is on-screen; coordinates for objects off the screen are unusable.
 
     In the future it would be worth exploring whether there's a way to produce ADM
     coordinates that are "Screen-accurate" while the object is on-screen, but still gives
     sensible results when the object is off-screen as well.

@@ -67,19 +68,20 @@ def room_norm_vector(vec, room_size=1.) -> Vector:
     return vec / chebyshev
 
 
-def closest_approach_to_camera(scene, speaker_object) -> (float, int):
+def closest_approach_to_camera(scene, speaker_object) -> tuple[float, int]:
     """
     The distance and frame number of `speaker_object`s closest point to
     the scene's camera.
 
     (Works for any object, not just speakers.)
     """
     max_dist = sys.float_info.max
     at_time = scene.frame_start
     for frame in range(scene.frame_start, scene.frame_end + 1):
         scene.frame_set(frame)
-        rel = speaker_object.matrix_world.to_translation() - scene.camera.matrix_world.to_translation()
-        dist = norm(rel)
+        rel = speaker_object.matrix_world.to_translation() - \
+            scene.camera.matrix_world.to_translation()
+        dist = float(norm(rel))
 
         if dist < max_dist:
             max_dist = dist

@@ -105,6 +107,10 @@ def speaker_active_time_range(speaker) -> FrameInterval:
 
 
 def speakers_by_min_distance(scene, speakers):
+    """
+    Sorts a list of speaker objects in ascending order by their closest
+    approach to the camera. Objects that approach closest are sorted highest.
+    """
     def min_distance(speaker):
         return closest_approach_to_camera(scene, speaker)[0]
 
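`room_norm_vector` divides a relative position by its Chebyshev norm, i.e. its largest absolute component, which projects the point onto the walls of a unit "room" cube for room-centric panner coordinates. A plain-tuple sketch of that normalization, with an added `room_size` scale factor that is an assumption here (the excerpt above shows only the plain `vec / chebyshev` division):

```python
def room_norm(vec, room_size=1.0):
    """Scale a 3-vector so its largest absolute component equals room_size,
    mapping a relative position onto the walls of a 'room' cube."""
    chebyshev = max(abs(c) for c in vec)  # L-infinity norm
    if chebyshev == 0:
        return (0.0, 0.0, 0.0)
    return tuple(room_size * c / chebyshev for c in vec)


print(room_norm((3.0, -1.5, 0.75)))  # → (1.0, -0.5, 0.25)
```

Unlike Euclidean normalization, the Chebyshev version keeps diagonal positions pinned to the room's corners rather than a sphere, which matches the cube-shaped panner space.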
@@ -31,7 +31,7 @@ def adm_object_rendering_context(scene: bpy.types.Scene):
 
 
 class ObjectMix:
-    def __init__(self, sources: List[bpy.types.Speaker],
+    def __init__(self, sources: List[bpy.types.Object],
                  scene: bpy.types.Scene, base_dir: str):
         self.sources = sources
         self.intermediate_filename = None

@@ -65,6 +65,7 @@ class ObjectMix:
 
     @property
     def mixdown_file_handle(self):
+        assert self.mixdown_filename
         if self._mixdown_file_handle is None:
             self._mixdown_file_handle = open(self.mixdown_filename, 'rb')
 

@@ -146,7 +147,7 @@ class ObjectMixPool:
     def __enter__(self):
         return self
 
-    def __exit__(self, exc_type, exc_val, exc_tb):
+    def __exit__(self, _exc_type, _exc_val, _exc_tb):
         for mix in self.object_mixes:
             mix.rm_mixdown()
 

@@ -156,7 +157,8 @@ class ObjectMixPool:
         return min(lengths)
 
 
-def object_mixes_from_source_groups(groups: List[List[bpy.types.Speaker]], scene, base_dir):
+def object_mixes_from_source_groups(groups: List[List[bpy.types.Object]],
+                                    scene: bpy.types.Scene, base_dir: str):
     mixes = []
     for group in groups:
         mixes.append(ObjectMix(sources=group, scene=scene, base_dir=base_dir))
 
@@ -1,9 +1,12 @@
-def all_speakers(scene):
+import bpy
+
+
+def all_speakers(scene: bpy.types.Scene) -> list[bpy.types.Object]:
     return [obj for obj in scene.objects if obj.type == 'SPEAKER']
 
 
-def solo_speakers(scene, solo_group):
+def solo_speakers(scene: bpy.types.Scene, solo_group: list[bpy.types.Object]):
     for speaker in all_speakers(scene):
+        assert type(speaker.data) is bpy.types.Speaker
         if speaker in solo_group:
             speaker.data.muted = False
         else:

@@ -14,5 +17,6 @@ def solo_speakers(scene, solo_group):
 
 def unmute_all_speakers(scene):
     for speaker in all_speakers(scene):
+        assert type(speaker.data) is bpy.types.Speaker
         speaker.data.muted = False
         speaker.data.update_tag()
 
operator_convert_particles_to_speakers.py (new file, 86 lines)

@@ -0,0 +1,86 @@
+## This is copied from
+## https://blender.stackexchange.com/questions/4956/convert-particle-system-to-animated-meshes?answertab=active#tab-top
+#
+# And needs to be adapted
+
+import bpy
+
+# Set these to False if you don't want to key that property.
+KEYFRAME_LOCATION = True
+KEYFRAME_ROTATION = True
+KEYFRAME_SCALE = True
+KEYFRAME_VISIBILITY = True  # Viewport and render visibility.
+
+
+def create_objects_for_particles(ps, obj):
+    # Duplicate the given object for every particle and return the duplicates.
+    # Use instances instead of full copies.
+    obj_list = []
+    mesh = obj.data
+    particles_coll = bpy.data.collections.new(name="particles")
+    bpy.context.scene.collection.children.link(particles_coll)
+
+    for i, _ in enumerate(ps.particles):
+        dupli = bpy.data.objects.new(
+            name="particle.{:03d}".format(i),
+            object_data=mesh)
+        particles_coll.objects.link(dupli)
+        obj_list.append(dupli)
+    return obj_list
+
+
+def match_and_keyframe_objects(ps, obj_list, start_frame, end_frame):
+    # Match and keyframe the objects to the particles for every frame in the
+    # given range.
+    for frame in range(start_frame, end_frame + 1):
+        print("frame {} processed".format(frame))
+        bpy.context.scene.frame_set(frame)
+        for p, obj in zip(ps.particles, obj_list):
+            match_object_to_particle(p, obj)
+            keyframe_obj(obj)
+
+
+def match_object_to_particle(p, obj):
+    # Match the location, rotation, scale and visibility of the object to
+    # the particle.
+    loc = p.location
+    rot = p.rotation
+    size = p.size
+    if p.alive_state == 'ALIVE':
+        vis = True
+    else:
+        vis = False
+    obj.location = loc
+    # Set rotation mode to quaternion to match particle rotation.
+    obj.rotation_mode = 'QUATERNION'
+    obj.rotation_quaternion = rot
+    obj.scale = (size, size, size)
+    obj.hide_viewport = not(vis)  # <<<-- this was called "hide" in <= 2.79
+    obj.hide_render = not(vis)
+
+
+def keyframe_obj(obj):
+    # Keyframe location, rotation, scale and visibility if specified.
+    if KEYFRAME_LOCATION:
+        obj.keyframe_insert("location")
+    if KEYFRAME_ROTATION:
+        obj.keyframe_insert("rotation_quaternion")
+    if KEYFRAME_SCALE:
+        obj.keyframe_insert("scale")
+    if KEYFRAME_VISIBILITY:
+        obj.keyframe_insert("hide_viewport")  # <<<-- this was called "hide" in <= 2.79
+        obj.keyframe_insert("hide_render")
+
+
+def main():
+    # In 2.8 you need to evaluate the dependency graph in order to get data
+    # from animation, modifiers, etc.
+    depsgraph = bpy.context.evaluated_depsgraph_get()
+
+    # Assume only 2 objects are selected.
+    # The active object should be the one with the particle system.
+    ps_obj = bpy.context.object
+    ps_obj_evaluated = depsgraph.objects[ps_obj.name]
+    obj = [obj for obj in bpy.context.selected_objects if obj != ps_obj][0]
+    ps = ps_obj_evaluated.particle_systems[0]  # Assume only 1 particle system is present.
+    start_frame = bpy.context.scene.frame_start
+    end_frame = bpy.context.scene.frame_end
+    obj_list = create_objects_for_particles(ps, obj)
+    match_and_keyframe_objects(ps, obj_list, start_frame, end_frame)
+
+
+if __name__ == '__main__':
+    main()
requirements_dev.txt (new file, 40 lines)

@@ -0,0 +1,40 @@
+asttokens==3.0.0
+attrs==21.4.0
+certifi==2025.10.5
+charset-normalizer==3.4.4
+cython==3.2.0
+decorator==5.2.1
+ear==2.1.0
+executing==2.2.1
+fake-bpy-module-4-3==20250130
+idna==3.11
+ipython==9.7.0
+ipython-pygments-lexers==1.1.1
+jedi==0.19.2
+lxml==4.9.4
+mathutils==3.3.0
+matplotlib-inline==0.2.1
+multipledispatch==0.6.0
+mypy==1.18.2
+mypy-extensions==1.1.0
+numpy==1.26.4
+parso==0.8.5
+pathspec==0.12.1
+pexpect==4.9.0
+pip==25.0.1
+prompt-toolkit==3.0.52
+ptyprocess==0.7.0
+pure-eval==0.2.3
+pygments==2.19.2
+requests==2.32.5
+ruamel-yaml==0.18.16
+ruamel-yaml-clib==0.2.14
+scipy==1.16.3
+setuptools==78.1.0
+six==1.17.0
+stack-data==0.6.3
+traitlets==5.14.3
+typing-extensions==4.15.0
+urllib3==2.5.0
+wcwidth==0.2.14
+zstandard==0.25.0