HappyPolygon (Registered Member)
Posts: 1,898 | Days Won: 94

Everything posted by HappyPolygon

  1. Skybox Lab

    What did you prompt it with to get those guys wearing headphones during a Victorian lunch? 🤣 One small inconvenience: the images are not HDRI... ☹️

    I've tested video games! It amazed me how it generated multi-level worlds that could actually be playable. Some even got characters in there! In this one it turned Spyro into a dog. I would play that world. This one even had the title floating! This one caught me by surprise. Just look at this! The way the edges are drawn to look like gears... Totally a possible Spyro game level. That's a Mario.

    I have a theory of how this AI works. There's really nothing like it on the internet. It wins on the following key points:
    - Consistent object style (it won't draw every single tree in a forest in a different style, for example; it will stick to 2 or 3 species, and it won't draw different kinds of mountains)
    - Consistent depth (it has an excellent sense of how perspective works and blends close, mid-range and far landscape parts well)
    - Sensible object positioning (it will usually position the camera in the center of a crossroads and place objects nicely around it to make the scene interesting)

    This leads me to believe there is some kind of "Houdini-like" engine running behind it. Both Houdini and Unreal Engine are able to produce procedural worlds. There could be some elaborate template algorithm tasked with making low-poly, proxy-based worlds that are fed to a DALL-E/Canvas-like AI (tuned to certain styles/training sets) to produce consistent images. For example, it could generate a scene with a crude fractal landscape, then some scattered secondary fractal objects for the background, then populate the scene with cones and cubes. The AI is tuned to recognize cubes as houses, cones as trees, far fractals as mountains and so on, based on their color... Even polygon selections could be generated to be painted differently, so it knows when to generate sand, grass or rock.
Everything else is dictated by the preset styles heavily influenced by different training sets.
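The two-stage pipeline I'm speculating about could be sketched in plain Python. To be clear, everything here is guesswork: the proxy-to-label mapping, the function names and the scattering logic are all invented for illustration; nothing is known about Blockade Labs' actual internals.

```python
import random

# Hypothetical stage 1: scatter crude proxy primitives around a central
# viewpoint. Stage 2 would hand a semantic label list/map to an image
# model conditioned on a preset style. All names here are invented.

PROXY_TO_LABEL = {"cube": "house", "cone": "tree", "fractal": "mountain"}

def build_proxy_scene(n_objects, seed=0):
    """Stage 1: place proxy objects around the camera at the scene centre."""
    rng = random.Random(seed)
    scene = []
    for _ in range(n_objects):
        proxy = rng.choice(sorted(PROXY_TO_LABEL))
        scene.append({
            "proxy": proxy,
            "angle": rng.uniform(0, 360),   # direction from the camera
            "dist": rng.uniform(5, 50),     # near / mid / far placement
        })
    return scene

def to_semantic_labels(scene):
    """Stage 2 input: what each proxy should be painted as."""
    return [PROXY_TO_LABEL[obj["proxy"]] for obj in scene]
```

Because the style comes from one preset and the layout from one generator pass, every "tree" in the scene shares a label, which would explain the consistent object style and sensible placement.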
  2. To be honest, except for Chris Schmidt, nothing really caught my interest, C4D-wise that is. Maybe that's because I'm an advanced user. Other than that, the ZBrush and Unreal presentations were interesting.
  3. Unfortunately, back then (I was 11 or 12 years old when I found out about Bryce and 3D software) the only way to know about things was through magazines. I learned about Kai's Power Tools a year ago! I never knew MetaCreations ever had Poser... Renderworld seems interesting; I can't find any sites on the internet talking about it, or screenshots of it. I found it was used to create some of my favorite games? A piece of software that has come a long way and is still relevant to many professionals is iClone. I remember that back in 2006 or so we used it (as kids) to add our voices to cartoons (it was called CrazyTalk back then).
  4. It's been 3 days now that I've been watching NAB 2023. Every single presenter showing how easy and fast Soft Bodies are:
    - "... and we add a Soft Body tag, which looks like a balloon, to the object we want to inflate, and we press play!" *the object just drops*
    - "Oops, forgot to make a floor." *object passes through the plane*
    - "And this happens because we need to set a collider..."
    me:
  5. Recently I found out that Poser still exists... POV-Ray is also still alive; I still have no idea how people use it. TopMod is still up (interesting modelling tools). Astonished to find out Wings3D is still maintained.
  6. I was going to propose the Track Modifier Tag. https://help.maxon.net/c4d/en-us/Default.htm#html/TCAANIMATIONMODIFIER.html?TocPath=Object%20Manager%7CTags%20Menu%7CAnimation%20Tags%7CTrack%20Modifier%7C_____0 There is a Quantization effect that acts like a stop-motion effect. So you could animate the second hand as usual and have it quantized at constant intervals.
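The Quantization idea boils down to snapping each frame's animation time back to the previous whole interval. A minimal plain-Python illustration (not the actual C4D API; the function name is mine):

```python
def quantize_time(t, interval):
    """Snap time t down to the previous multiple of `interval`,
    giving the stepped, stop-motion look of a quantized track."""
    return (t // interval) * interval

# A second hand animated smoothly at 30 fps, but quantized to
# one-second (30-frame) steps: it holds its pose, then jumps.
steps = [quantize_time(f, 30) for f in (0, 15, 29, 30, 59, 60)]
# steps == [0, 0, 0, 30, 30, 60]
```

So you animate the hand as a continuous rotation and let the quantization produce the tick-tick motion at constant intervals.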
  7. How the hell did GPT-4 help you with that ? It does XPresso now ?
  8. Skybox Lab

    I can't quite pinpoint its art style. It looks like WoW. It reminds me somehow of these games: RiME, The Witness, No Man's Sky. If it had some more detail it would look like Ori. Still very pleasing to watch, though. I've been hitting Generate with the same "Alice in Wonderland" prompt all night since yesterday, and it surprised me every time. Something I couldn't get it to generate is caves/caverns. For some reason it goes sci-fi and generates room interiors.
  9. I wonder the same about Rovio! Why would anyone pay $1B for a company that is in free fall? If the audience has had enough of Angry Birds, then there is no logic in buying all its assets for that high a price. On the contrary, if something loses that much value, it should change hands for a much lower price. It's like Rovio wants to exit with one last big win (erasing years of losses). Well, they can sell the company at whatever price they want, but the buyers should at least have the free choice of not buying until Rovio drops the price.
  10. You have to check this out!!! https://skybox.blockadelabs.com/ Magnitudes more impressive by my standards. Just found out about it.
  11. @dast's latest dial plugin reminded me of another old idea.

    Directional Selection
    Difficulty Level: 10/10
    Description: A live-selection enhancement tool. It provides easy element-to-element manual selection based on relations between elements. It works by selecting a single element (point, polygon or edge) of a geometric surface. A visual Dial GUI is drawn on top of that element. The Dial is separated into sectors depending on the number of neighboring elements of the same type, and each sector has a shortcut key. By pressing one of the option keys, the corresponding element is added to the selection. Once the new element is added, the Dial moves onto that element, offering new options.

    The image depicts the Dial and the current selection of the point. The point is connected to 5 other points, so there are five options to move to. My initial thought was to use the AWEDXZ keys as the options, since they surround the A key and people are used to them from games; but the numeric keypad also works like a clock, offering up to 8 options around the "5" key, though that is more suited to left-handed people (assuming the user never leaves the mouse).

    Extra feature: auto-frame the last selected element.

    I give this plugin a 10/10 difficulty for the following reasons:
    - As far as I know, edges are not part of the data-structure tree, so finding relations between them is a bit tricky.
    - Drawing a Dial with sectors as in the picture requires some viewport optical analysis, which I don't think is possible.
    - Auto-framing can be tricky because the native command doesn't check whether the selected element is visible to the user; it just frames it, which means it could be hidden by other parts of the object's geometry.
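The neighbour-to-shortcut mapping could work something like this sketch: project each neighbouring element to screen space, sort the neighbours by angle around the selected element, and hand out keys in order. Everything here is hypothetical; a real plugin would get the screen positions and adjacency from the host SDK (the tricky part, per the difficulty notes above).

```python
import math

def dial_sectors(center, neighbours, keys="AWEDXZ"):
    """Map each neighbour (a 2D screen position) to a shortcut key,
    ordered clockwise starting from straight up, as the proposed
    Dial GUI would. Purely illustrative; no C4D API involved."""
    def clockwise_angle(p):
        # atan2(dx, dy) measures the angle from the up direction,
        # increasing clockwise (in a y-up convention)
        return math.atan2(p[0] - center[0], p[1] - center[1]) % (2 * math.pi)
    order = sorted(range(len(neighbours)),
                   key=lambda i: clockwise_angle(neighbours[i]))
    return {keys[slot]: idx for slot, idx in enumerate(order)}

# Four neighbours at the compass points around the selected point:
mapping = dial_sectors((0, 0), [(0, 1), (1, 0), (0, -1), (-1, 0)])
# mapping == {'A': 0, 'W': 1, 'E': 2, 'D': 3}
```

Pressing a key would then add `neighbours[mapping[key]]` to the selection and recompute the dial at the new element.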
  12. What do the Green-Blue-White colors indicate ?
  13. I got a bit confused here... Do you need the gradient to be linked to the camera or only to the object ? (You mentioned z-depth and this got me confused)
  14. Could you share a screenshot of how bad it is ? I use Foxit Reader since 2014. What are "Adobe's proprietary dialogs" ?
  15. Venom effect

    The basic principle is this: venom.c4d. It's the shittiest thing I've ever uploaded to this forum, because I use R23. In R25+ I would: 1) use the Axis capsule to align the parametric arcs with the main path (I didn't make the arcs editable, so you can figure out how things work); 2) find/create a capsule that connects matrices in close proximity (enable the Tracer and Matrix objects) to create secondary branching. You can also fake it a bit by modeling different spewing slimes and using the same principle to make them grow from the main path. The slimes will have to be arced so you can scale them down on the Y axis as the field passes by, to emulate the relax effect (pivot point at the start of the spline).
  16. Yes, just enable bump and add a white color. It will automatically use it to cover the entire texture.
  17. Make a second material with the label and follow the instructions here to use texture layering (bottom of page).
  18. Maybe you need to use the Connect object, or merge all individual objects into one mesh.
  19. Signs

    Lathe Lane. I've never traveled abroad, so I don't know how international signage works... When multiple lanes/roads converge, is it called a ?
  20. Maybe you've positioned the pivot point (axis center) on some edge and not at the center of mass of the object? Usually these transformation errors occur when the local coordinates don't align correctly with the global coordinates. This commonly happens when the object that is supposed to transform (in space, that is) is transformed in reference to some other object, usually a parent. Hard to reach a conclusion without a scene file.
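The pivot-point effect above can be shown with a toy 2D rotation: rotating about the object's own centre spins it in place, while the same rotation about a pivot sitting on an edge also translates it. An illustrative sketch only, not C4D code:

```python
import math

def rotate_about(point, pivot, degrees):
    """Rotate a 2D point around an arbitrary pivot."""
    a = math.radians(degrees)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(a) - dy * math.sin(a),
            pivot[1] + dx * math.sin(a) + dy * math.cos(a))

# 90° about the object's own centre: the point just orbits in place.
in_place = rotate_about((1, 0), (0, 0), 90)   # ~(0, 1)

# The same 90° about an offset pivot at (2, 0), e.g. an edge of the
# mesh: the point ends up somewhere else entirely.
swung = rotate_about((1, 0), (2, 0), 90)      # ~(2, -1)
```

The same thing happens in 3D when a child's local axes are transformed relative to a parent whose axes don't match the global ones.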
  21. Mudbox 2024

The only listed change in Mudbox 2024 is the new macOS installer, since the software now runs on current Apple Silicon Macs, albeit via Apple’s Rosetta 2 emulator rather than native Apple Silicon support. The release is the third consecutive annual update to make no changes to the software beyond a new installer, although the cost of subscriptions rises again this year.

Luma AI (Unreal Plugin)

https://cdn-luma.com/public/lumalabs.ai/videos/21-Nov-2022.2/landscape.mp4

Luma AI is a pioneer in the field of 3D capture based on Neural Radiance Fields (NeRF): a new method for training a neural network to optimise a volumetric representation of a scene from a set of source images. Although its creators focused on NeRF as a means of synthesising new views of a scene, the volumetric representation can be converted into a 3D mesh, making it an alternative to photogrammetry for 3D scanning. In the case of Luma AI, captures can be generated from video captured on an iPhone using its iOS app, or from video or zip files of images uploaded to its web app, both currently free to use in beta. Users of the web app can download captures of objects in glTF, OBJ or USDZ format and captures of scenes in PLY format – and now, as Luma Field files for use with the new Unreal Engine plugin. Luma AI’s Unreal Engine plugin is compatible with Unreal Engine 5.0+ running on Windows only. It’s currently free in alpha. The web app should work with any standard desktop browser, and is currently free in beta.

https://docs.lumalabs.ai/9DdnisfQaLN1sn

Plask Generative AI

Although Plask is best known for its browser-based AI mocap system, now renamed Plask Motion, the new service is its first move into generative AI. It lets users generate 2D character images for illustrations or concept art work, and to do so with the characters in specific poses, matching those of a 3D mannequin in Plask Generative AI’s user interface.
Both male and female mannequins are available, and can be posed in ways that will be familiar to users of DCC software, with position and rotation gizmos for each joint in the character’s skeleton. It’s also possible to upload a source photo, and have Plask Generative AI match the pose from it. Once a pose has been established, users can guide the visual style of the image generated by entering text prompts, or picking from a range of presets: there are options for both realistic and anime-style images. Users can also adjust a small set of control parameters, including picking from the samplers currently available in Stable Diffusion. Plask Generative AI is currently available free in public beta. It should run in any standard desktop web browser. Plask hasn’t announced a commercial release date or pricing yet.

https://docs.plask.ai/

Sapphire 2023.5

Builder, Sapphire’s node-based framework for creating custom effects and transitions, gets a number of workflow improvements, including the option to set the value of one parameter to drive that of others. It is also now possible to save custom effects as presets that can be shared across host applications. The LensFlare plugin gets a further 12 presets, bringing the total to 145. Sapphire 2023.5 is available for a range of compositing and editing software, including After Effects, DaVinci Resolve, Flame and Nuke, on Windows 10+, CentOS/RHEL 7+ Linux and/or macOS 10.14+. New licences cost $1,695 for the Avid and Adobe/OFX editions, or $2,795 for a multi-host licence.

https://borisfx.com/release-notes

SynthEyes 2304

SynthEyes 2304 expands the software’s lens modeling capabilities, adding standardised radial and anamorphic lens models to the solver and image preprocessor. With these models, lens distortion can be exported directly to the built-in nodes of compatible applications like Nuke and Fusion, without requiring STMap images or lens workflow processing.
It is also now possible to solve for anamorphic distance, to accommodate shots filmed with anamorphic lenses with different horizontal and vertical nodal point locations. In addition, lens parameters can now be animated, both frame by frame and via a new linear interpolation model. The workflow makes it possible to reduce jitter when solving shots with focus breathing. You can see a full list of changes via the link at the foot of the story. SynthEyes 2304 is available for Windows 10+, CentOS 7 Linux and macOS 10.13+. New licences cost from $299 to $699, depending on whether you buy the Intro or Pro edition of the software, and for which platforms.

https://www.ssontech.com/content/bigrcl.html#se2304b1056

DaVinci Resolve 18.5

DaVinci Resolve 18.5 is a major update, adding initial support for USD-based workflows in both editions of the software, plus a new AI-based relighting toolset and improved AI-based video upscaling in Studio. The Studio edition also gets new AI-based video editing tools, including AI-based audio categorisation, transcription and captioning. Both updates were released at NAB 2023.

VFX artists working in movie pipelines based around the Universal Scene Description format get the option to import USD and USDZ files, including geometry, materials, lights, cameras and animation. Other new features relevant to visual effects work include Multi-merge, a new tool for managing multiple foreground sources as a composited layer stack, shown at 22:10 in the video at the top of the story. Multi-merge makes it possible to composite shots using a layer-based workflow as well as the node-based workflow available in Fusion, Resolve’s 3D compositing environment. In addition, DaVinci Resolve Studio’s native AI depth map generator is now available inside Fusion. Colorists and effects artists using Studio get Relight, a new AI-based shot relighting system.
With it, artists can add virtual directional, point or spot lights to a shot, and adjust their colour, surface softness and specularity, as shown at 23:40 in the video at the top of the story. Light intensity information is placed in the alpha channel for use with any of Resolve’s existing grading tools. Studio’s existing Super Scale feature for up-resing video gets a new 2x Enhanced mode for “extremely high quality 2x output”, with manual controls to adjust noise reduction and image sharpness. Colorists can also now add their own marker overlays and annotations to footage in the color viewer, making it possible for supervisors to flag up parts of a sequence that need adjusting.

The 18.5 updates also improve DaVinci Resolve’s handling of projects with missing LUTs. Rather than interrupting playback with a warning dialog, missing LUTs are now shown via an overlay at the bottom of the screen. The files can be relinked via a Missing LUTs tab in the LUT gallery. Other workflow improvements include the option to override color management settings on a per-timeline basis, and to copy color grades to all of the angles within multicam shots when flattening multicam clips. In addition, DaVinci Resolve now automatically creates the inputs and outputs required by Resolve FX effects plugins, removing the need to drag plugins in as independent FX nodes.

Video editors using DaVinci Resolve Studio get a number of new AI-based tools for working with the audio in media clips. The software can now automatically sort clips by type (dialogue, effects or music), and can also automatically transcribe dialogue and generate text captions in a new subtitle track. Audio artists get support for edit and mix groups in DaVinci Resolve’s Fairlight toolset, support for nesting VCAs, and the option to stream from external database applications like Soundminer.
Under the hood, the edit timeline playback engine has been “hugely improved”, which should smooth playback on lower-powered systems where a project cannot be played back in real time. There is also a new dedicated render cache management window for setting the size of the data caches DaVinci Resolve generates to improve playback performance. Changes to the media formats supported include the option to export GIF, PNG and JPEG image sequences and animated GIFs; and encode support for ProRes, AV1, H.264, MP3 and AAC in MKV containers. Users can also now upload video directly to TikTok from DaVinci Resolve. There are also a number of changes to the remote monitoring toolset in DaVinci Resolve Studio, including an upcoming app for iPhones and iPads – not available in the current beta, but due in the final release.

DaVinci Resolve 18.5 and DaVinci Resolve Studio 18.5 are available in beta for Windows 10+, macOS 12.0+ and CentOS 7.3+/Rocky Linux 8.6+. The updates are free to existing users. The base edition of the software is free; the Studio edition, which adds the AI features, stereoscopic 3D tools, HDR grading and more Resolve FX filters and Fairlight FX audio plugins, costs $295.

https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=179277

Fusion Studio 18.5

A version of the Fusion toolset is included in DaVinci Resolve, Blackmagic Design’s colour grading, editing and visual effects software, version 18.5 of which has also just been released in beta. DaVinci Resolve gets more promotional support than Fusion Studio, so the demo video above is actually for DaVinci Resolve 18.5, but the new features shown are common to both applications.

One of the key changes in Fusion Studio 18.5 is initial USD support for movie VFX pipelines. Visual effects artists working in pipelines based around the Universal Scene Description get the option to import data in USD format, including geometry, materials, lights, cameras and animation.
The implementation supports .usdc, .usda and .usdz files. Another significant new feature is Multi-merge, a new tool to “connect and manage multiple foreground sources as a composited layer stack”. It makes it possible to composite shots using a layer-based workflow along the lines of that in tools like After Effects, as well as through the node-based workflow traditionally used in Fusion. Users can change the order or visibility of foreground sources via a Layer List palette in the Inspector, and edit layer properties like position, size and apply mode via the layer properties palette below it.

Fusion Studio also integrates DaVinci Resolve Studio’s AI-based depth map generator, which automatically generates depth maps from source footage, for use in generating effects like fog and depth of field. The release notes also list GPU-accelerated Clean Plate and Anaglyph features, although there is no more information than that, and “up to 3x faster renders” when using the splitter tool.

Fusion Studio 18.5 is available for Windows 10+, macOS 12.0+ and CentOS 7.3+/Rocky Linux 8.6+. New licences of Fusion Studio cost $295. The update is free to existing users.

https://www.blackmagicdesign.com/support/readme/9f5961cf5c6f47679e921de9de93d388

RenderMan 25

Although RenderMan 25 isn’t as large an update as 2021’s milestone RenderMan 24, it has its own major new feature, in the shape of the AI denoiser used on every Pixar movie since Toy Story 4. It is tailored to both photorealistic VFX and stylised animation, having been trained using production data from ILM, Pixar and Walt Disney Animation Studios, and is designed to preserve complex details in shots. According to Pixar, it “excels on complex, detailed imagery that would cause other denoisers to fail”, such as shots containing a lot of hair and fur, or FX shots with complex points-based effects.
The release also features updates to the major toolsets added in RenderMan 24, including MaterialX Lama, the Industrial Light & Magic-developed MaterialX-based material layering system. Changes in RenderMan 25 include a new material response for iridescent materials, LamaIridescence, which includes the option to map the colours generated to an artist-directed colour set. In addition, Lama now more accurately replicates the way that light interacts between material layers.

Stylized Looks, RenderMan’s new non-photorealistic rendering toolset, has also been updated. New features include a Distort option in the lines shader, which applies fractal distortion to rendered outlines, creating a more naturalistic, hand-drawn effect. The update also adds more presets for hatching effects.

RenderMan XPU, RenderMan’s new hybrid CPU/GPU render engine, gets a significant update, with the latest release introducing support for volumes, deformation motion blur, and AOVs, LPEs and trace groups. XPU also now supports more features of PxrCamera, including depth of field and chromatic aberration, and improves interactive performance, particularly on GPUs with limited graphics memory. According to Pixar, the changes mean that XPU is now suitable for “look development beyond hard surfaces”, although it is still intended for look dev rather than final-quality output.

In addition, RenderMan has been updated to a newer version of the VFX Reference Platform, although only the CY2021 specification, not the newer version of the spec supported by other key VFX tool developers. Deprecated features include PxrSeExpr, RenderMan’s implementation of SeExpr, the Disney-developed expression language used in Maya’s XGen toolset. RenderMan’s RIS-only C++ patterns for shader authoring have also been superseded by OSL equivalents, which work in both RIS and XPU. RenderMan 25 is available for Windows 10, CentOS/RHEL 7.2+ Linux and macOS 10.14+.
The plugins are compatible with Blender 2.83/2.93/3.0+, Houdini 18.5+, Katana 4.0+ and Maya 2020-2023, plus Mari 4.5+. New node-locked or floating licences cost $595. There is also a free non-commercial edition of RenderMan, which has also been updated to version 25.

Non-Commercial RenderMan 25

Pixar has released the free Non-Commercial edition of RenderMan 25, the latest version of the renderer. As with previous versions of Non-Commercial RenderMan, the new release can be used for personal projects, research and tools development, including the development of commercial plugins and assets. Output isn’t watermarked, although Pixar stipulates that users should add its ‘rendered with RenderMan’ logo to the credits of any project released publicly. It includes many of the key features from the commercial edition, including the new AI denoiser, used by Pixar on all of its own movies since Toy Story 4. In addition, Non-Commercial RenderMan now includes RenderMan XPU, Pixar’s new hybrid CPU/GPU rendering system, which wasn’t included in the previous non-commercial release. However, one feature still not included in Non-Commercial RenderMan is Stylized Looks, RenderMan’s non-photorealistic rendering toolset, since it was developed in partnership with Lollipop Shaders.

Non-Commercial RenderMan 25 is available for Windows 10, CentOS/RHEL 7.2+ Linux and macOS 10.14+. Plugins are available for Blender 2.83/2.93/3.0+, Houdini 18.5+, Katana 4.0+ and Maya 2020-2023. To download it, you will need to register for a free account on Pixar’s forum, which entitles you to two node-locked licences. The licence times out after 120 days, after which you have to renew.
https://renderman.pixar.com/intro

HeliumX Lite for After Effects

HeliumX has released HeliumX Lite, a free plugin for importing and rendering 3D models inside After Effects, and extruding and animating 3D text. The plugin is a cut-down version of Helium, HeliumX’s wide-ranging 3D toolset for Adobe’s compositing app. Although After Effects can create simple 3D text, and can now import 3D models natively, at least in current beta builds, HeliumX Lite opens up a wider range of 3D workflows. As well as importing 3D models in OBJ or FBX format, or adding 3D primitives to After Effects compositions, users can extrude and animate 3D text and paths, and use the plugin’s readymade 3D page scroll effect. The full edition of Helium also includes toolsets for creating and animating arrays of objects for motion graphics work, generating 3D terrain, and creating and rendering volumetric smoke and fog.

HeliumX Lite is available free for After Effects 2020+ on Windows 10+ and macOS 10.14+. Helium is rental-only, and is available via aescripts + aeplugins, priced at $50/year for the first year; $60/year thereafter.

https://heliumx.tv/download/

Zoo Chat GPT

Create 3d Characters has released Zoo Chat GPT, a new tool that integrates ChatGPT inside Maya. The tool, which is currently in beta, lets artists use OpenAI’s AI chatbot to write simple Python and MEL scripts inside the 3D animation software, or to control Maya using natural-language commands. It is available in Zoo Tools Pro 2.7.6, the latest version of Create 3d Characters’ Maya productivity tools. Zoo Chat GPT looks to have a similar range of uses to Maya Assist, Autodesk’s own new AI toolset based on OpenAI’s technology, which is currently in private beta. The tool puts a ChatGPT window inside the Maya interface, into which users can type natural-language commands like ‘Duplicate the object “tree_geo” twenty times’. ChatGPT then generates Python or MEL code that Zoo Chat GPT streams back into Maya.
Clicking the Play button inside Zoo Chat GPT’s UI executes the code, hopefully causing Maya to perform the action required – in this case, creating 20 duplicates of a 3D tree. Other examples include assigning random colours to different parts of a 3D model, while the video in this tweet shows ChatGPT in use to create a custom Maya interface panel. It’s also possible to use ChatGPT as an interactive help function, typing in plain-language questions about Maya or 3D terminology in general, and getting concise answers.

Zoo Chat GPT uses OpenAI’s API, which currently provides access to the ChatGPT 3.5 models, rather than the newest GPT-4 model. It was trained using online data extending up to mid-2021, so it will lack information about new features in versions of Maya released after that: from Maya 2022.3 onwards. It’s also important to note that ChatGPT does make mistakes. However, when it fails, ChatGPT can be prompted to rewrite its own code, making it possible to tackle more complex tasks iteratively, and when it works, it can be a handy time-saver.

Zoo Chat GPT is available in beta as part of Zoo Tools Pro 2.7.6+. It is compatible with Maya 2022+. To use it, you need to link it to an OpenAI API key, which you can get by registering for a free OpenAI account. Generating code then consumes tokens, which are currently priced at $0.002 per thousand (which equates to approximately 750 words of code) for the gpt-3.5-turbo model. Zoo Tools Pro itself is compatible with Maya 2017+ on Windows, Linux and CentOS Linux, and is available by taking out at least one month’s subscription to the Create 3d Characters website. You can continue to use the software commercially after the subscription ends, so you effectively get a perpetual licence with the first month, which costs $40; after that, it’s $10/month.
https://create3dcharacters.com/

3Dconnexion support for Cinema 4D

3Dconnexion and Maxon have collaborated on a new, streamlined and powerful integration between Cinema 4D and the 3Dconnexion product line. 3Dconnexion, manufacturer of the well-known SpaceMouse® range of 3D mice, and Maxon, creator of powerful solutions for artists in motion graphics, visual effects and visualization, have teamed up on a new solution for Cinema 4D, aiming to offer the best possible user experience to their content creator community. The new integration, available for both Windows and Mac users, was able to come to life through a close partnership between both sides’ engineering teams, who have been putting their mutual users’ needs at the heart of this project. It ensures that all known and appreciated 3Dconnexion features are available on all devices - from SpaceMouse to CadMouse, Keyboard Pro and Numpad. The new 3Dconnexion solution for Cinema 4D users is included in the latest 3Dconnexion driver – 3DxWare® – which can be downloaded freely from the 3Dconnexion website. Through a series of new and improved software integrations, 3Dconnexion has recently expanded its offering to the digital content creation field for artists, 3D creators and game developers. The new Cinema 4D solution adds to a portfolio of industry-standard software applications featuring full support for 3Dconnexion devices, such as Unreal Engine, Unity, ZBrush and many others. In this way, the company seeks to support creators throughout their entire workflow, with unified navigation and responsive features ranging across the entire application suite.

https://3dconnexion.com/gr/applications/maxon-computer-cinema-4d/

Isotropix has discontinued Clarisse

Users have begun sharing emails from Isotropix customer support confirming that development of Clarisse and CNode, its GUI-free edition for render farms, has been discontinued.
You can see the text of the message in this Reddit post; a similarly worded email has been forwarded independently to CG Channel by another user. The Isotropix website has been updated to allow logins from Enterprise users only: a message informs consumer customers that they “no longer have access to the User Account”, and tells them to contact support by email.

LightWave Digital has acquired LightWave

Announcing the news on the Discord server, Bishop posted: “We picked up the drives and servers an hour or so ago – Deuce is driving them back now. WE OWN LIGHTWAVE 3D.” At the time of writing, neither the LightWave nor the Vizrt website has been updated with the news, but you can find the text of the official announcement in the Discord post. Vizrt CEO Michael Hallén is quoted as saying, “We did not make this decision lightly, but the aim has always been to ensure LightWave 3D has a future with a team that has a strong desire to breathe new life into [it].” In another Discord post, Bishop says: “So, now the hard work starts – I promise you guys, we will make this great again … over the next 5 years+ we will transform LightWave back into a state-of-the-art 3D package. We aim to have an upgrade out in around 6 months and follow that up with a second 6 months after that.” LightWave Digital aims to announce the upgrade path from existing versions of the software – the final Vizrt release was LightWave 2020 – on 4 May 2023.

Mocha Pro 2023

The release features workflow improvements throughout the software’s rotoscoping, tracking and rendering toolsets, with many changes specifically intended to streamline workflow on complex shots. The rotoscoping toolset gets a new Falloff tool, shown at 00:30 in the video above, intended to make the process of making larger adjustments to roto splines more organic. Users can adjust the radius and strength of the falloff gradient to control how many points are affected while dragging the spline around in the viewport.
It is also now possible to shrink or grow splines based on edge angle rather than direct scaling, which should mean that less manual adjustment is needed when expanding complex matte shapes. To streamline work on complex shots, a new Split Contours option splits the selected points into a separate layer, retaining all of their keyframes, making it possible to split large roto shapes into sets of smaller ones. It is also possible to create a duplicate layer that removes all existing keyframes but retains the current shape of the roto spline, intended to make it easier to break up tracks into sections.

Changes to the tracking toolset include a new Merge Tracks option to merge multiple layers of tracking data into one, intended to streamline the process of exporting more complex tracks. The Insert Module gets a new Difference blending mode, intended to make it easier to line up inserts with footage, and improvements to the Region of Interest cropping system. The Remove Module gets a new Static Scene option that makes it possible to remove moving objects from locked shots without having to track them, reducing processing time.

Pipeline changes include support for reading 12-bit DPX images, and for VFX Reference Platform 2022. The OFX plugin edition of Mocha Pro gets the option to generate tracking data directly inside Nuke, rather than having to copy it to the clipboard or export it to disk. There are also a number of smaller changes: you can find a full list via the links below.

Mocha Pro 2023 is available as a standalone application, and as Adobe, Avid and OFX plugins. The standalone edition is compatible with Windows 8.1+, RHEL/CentOS 7+ Linux and macOS 10.13+. The plugins are compatible with After Effects and Premiere Pro CC 2014+, Media Composer 8+, Nuke 8+, Flame 2020/2021, Fusion Studio 8+, Silhouette 2020+ and Vegas Pro 14+.
A new licence costs $695 for the Adobe, Avid or OFX plugins, $995 for all the plugins, or $1,495 for the standalone edition plus all of the plugins. https://borisfx.com/release-notes/mochapro-2023-release-notes#_new_in_mocha_pro_2023_v10_0_0
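As a back-of-the-envelope check on the gpt-3.5-turbo pricing quoted in the Zoo Chat GPT item above ($0.002 per thousand tokens, roughly 750 words of code), the per-request cost is easy to estimate. The function name is mine; the rate is the one quoted in the post:

```python
def api_cost_usd(tokens, price_per_1k_tokens=0.002):
    """Estimated cost of a gpt-3.5-turbo request at the quoted
    $0.002-per-1,000-tokens rate."""
    return tokens * price_per_1k_tokens / 1000.0

# A full script's worth of output (~750 words, roughly 1,000 tokens)
# costs about a fifth of a cent:
cost = api_cost_usd(1000)   # 0.002 USD
```

In other words, even iterating many times on a broken script stays well under the $10/month subscription cost mentioned for Zoo Tools Pro itself.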
  22. Are you also rotating Generators (cloners, sub-division surfaces etc), Effectors or Deformers ?