
Leaderboard

  1. HappyPolygon (Registered Member)
     • Points: 7
     • Posts: 1,898

  2. Cerbera (Community Staff)
     • Points: 4
     • Posts: 17,859

  3. Chester Featherbottom (Registered Member)
     • Points: 3
     • Posts: 94

  4. koko erico (Registered Member)
     • Points: 3
     • Posts: 7

Popular Content

Showing content with the highest reputation on 04/07/2022 in all areas

  1. As long as it can't put a red cube on top of a blue cube, we're safe. But once it can do a chrome sphere on a checkerboard floor...
    3 points
  2. Forget AI images like these that look like quick Corel Painter concept art or quick PS compositions with DoF-covered backgrounds:

Today the San Francisco-based lab announced DALL-E's successor, DALL-E 2. It produces much better images, is easier to use, and, unlike the original version, will be released to the public (eventually). DALL-E 2 may even stretch current definitions of artificial intelligence, forcing us to examine that concept and decide what it really means.

"Teddy bears mixing sparkling chemicals as mad scientists, steampunk" / "A macro 35mm film photograph of a large family of mice wearing hats, cozy by the fireplace"

Image-generation models like DALL-E have come a long way in just a few years. In 2020, AI2 showed off a neural network that could generate images from prompts such as "Three people play video games on a couch." The results were distorted and blurry, but just about recognizable. Last year, Chinese tech giant Baidu improved on the original DALL-E's image quality with a model called ERNIE-ViLG.

DALL-E 2 takes the approach even further. Its creations can be stunning: ask it to generate images of astronauts on horses, teddy-bear scientists, or sea otters in the style of Vermeer, and it does so with near photorealism. The examples that OpenAI has made available (see below), as well as those I saw in a demo the company gave me last week, will have been cherry-picked. Even so, the quality is often remarkable. "One way you can think about this neural network is transcendent beauty as a service," says Ilya Sutskever, cofounder and chief scientist at OpenAI. "Every now and then it generates something that just makes me gasp."

Diffusion models are trained on images that have been completely distorted with random pixels. They learn to convert these images back into their original form. In DALL-E 2, there are no existing images. So the diffusion model takes the random pixels and, guided by CLIP, converts them into a brand-new image, created from scratch, that matches the text prompt. (A toy sketch of this noise-to-image loop appears after this list.)

The diffusion model allows DALL-E 2 to produce higher-resolution images more quickly than DALL-E. "That makes it vastly more practical and enjoyable to use," says Aditya Ramesh at OpenAI. In the demo, Ramesh and his colleagues showed me pictures of a hedgehog using a calculator, a corgi and a panda playing chess, and a cat dressed as Napoleon holding a piece of cheese. I remark on the weird cast of subjects. "It's easy to burn through a whole work day thinking up prompts," he says.

"A sea otter in the style of Girl with a Pearl Earring by Johannes Vermeer" / "An ibis in the wild, painted in the style of John Audubon"

DALL-E 2 still slips up. For example, it can struggle with a prompt that asks it to combine two or more objects with two or more attributes, such as "A red cube on top of a blue cube." OpenAI thinks this is because CLIP does not always connect attributes to objects correctly.

As well as riffing off text prompts, DALL-E 2 can spin out variations of existing images. Ramesh plugs in a photo he took of some street art outside his apartment. The AI immediately starts generating alternate versions of the scene with different art on the wall. Each of these new images can be used to kick off its own sequence of variations. "This feedback loop could be really useful for designers," says Ramesh. One early user, an artist called Holly Herndon, says she is using DALL-E 2 to create wall-sized compositions. "I can stitch together giant artworks piece by piece, like a patchwork tapestry, or narrative journey," she says. "It feels like working in a new medium."

User beware

DALL-E 2 looks much more like a polished product than the previous version. That wasn't the aim, says Ramesh. But OpenAI does plan to release DALL-E 2 to the public after an initial rollout to a small group of trusted users, much like it did with GPT-3. (You can sign up for access here.)

GPT-3 can produce toxic text. But OpenAI says it has used the feedback it got from users of GPT-3 to train a safer version, called InstructGPT. The company hopes to follow a similar path with DALL-E 2, which will also be shaped by user feedback. OpenAI will encourage initial users to break the AI, tricking it into generating offensive or harmful images. As it works through these problems, OpenAI will begin to make DALL-E 2 available to a wider group of people.

OpenAI is also releasing a user policy for DALL-E, which forbids asking the AI to generate offensive images (no violence or pornography) and no political images. To prevent deepfakes, users will not be allowed to ask DALL-E to generate images of real people.

"A bowl of soup that looks like a monster, knitted out of wool" / "A shiba inu dog wearing a beret and black turtleneck"

As well as the user policy, OpenAI has removed certain types of image from DALL-E 2's training data, including those showing graphic violence. OpenAI also says it will pay human moderators to review every image generated on its platform. "Our main aim here is to just get a lot of feedback for the system before we start sharing it more broadly," says Prafulla Dhariwal at OpenAI. "I hope eventually it will be available, so that developers can build apps on top of it."

Creative intelligence

Multiskilled AIs that can view the world and work with concepts across multiple modalities, like language and vision, are a step towards more general-purpose intelligence. DALL-E 2 is one of the best examples yet.

But while Etzioni is impressed with the images that DALL-E 2 produces, he is cautious about what this means for the overall progress of AI. "This kind of improvement isn't bringing us any closer to AGI," he says. "We already know that AI is remarkably capable at solving narrow tasks using deep learning. But it is still humans who formulate these tasks and give deep learning its marching orders."

For Mark Riedl, an AI researcher at Georgia Tech in Atlanta, creativity is a good way to measure intelligence. Unlike the Turing test, which requires a machine to fool a human through conversation, Riedl's Lovelace 2.0 test judges a machine's intelligence according to how well it responds to requests to create something, such as "A picture of a penguin in a spacesuit on Mars." DALL-E scores well on this test.

But intelligence is a sliding scale. As we build better and better machines, our tests for intelligence need to adapt. Many chatbots are now very good at mimicking human conversation, passing the Turing test in a narrow sense. They are still mindless, however. But ideas about what we mean by "create" and "understand" change too, says Riedl. "These terms are ill-defined and subject to debate." A bee understands the significance of yellow because it acts on that information, for example. "If we define understanding as human understanding, then AI systems are very far off," says Riedl. "But I would also argue that these art-generation systems have some basic understanding that overlaps with human understanding," he says. "They can put a tutu on a radish in the same place that a human would put one."

Like the bee, DALL-E 2 acts on information, producing images that meet human expectations. AIs like DALL-E push us to think about these questions and what we mean by these terms. OpenAI is clear about where it stands. "Our aim is to create general intelligence," says Dhariwal. "Building models like DALL-E 2 that connect vision and language is a crucial step in our larger goal of teaching machines to perceive the world the way humans do, and eventually developing AGI."
    1 point
  3. You are right. It seems I have got too old, and my way of thinking is not as adaptable to new rules and paradigms as it used to be. Thank you @Chester Featherbottom for the visual example.
    1 point
  4. There are quite a few things that jump out at me:

     • The sun angle of the background image shows the sun is just a few degrees off the horizon, which means there should be long, streaking shadows running all the way out of shot. Instead they're short and dumpy, as if it's almost midday and the sun is above us. The light angle is simply wrong for the shot you're compositing into.
     • There's too much fill light on the left. That shadow should be almost black with direct sunlight; instead it is very well lit, which makes the unit not look planted on the floor.
     • It's leaning to the right. Consider rotating the camera to the right so the object is centred, then using Film Offset X in the camera to push it back.
     • Your metal is just too clean. Sheet metal comes off a roller, so there would be some brushing or anisotropy effects on the surface. Or fingerprints, or corrosion. Galvanised stainless steel for industrial outdoor use is never that perfect.
     • The background image choice isn't great. The entire right-hand side of the image is brighter than the left, so this just makes all the lighting issues even worse, as it looks like there's a bright haze of fog/sunlight behind the product. Did a nuclear bomb go off behind the product? Maybe.

Quick PS fix, but I can't fix the short shadows:
    1 point
  5. Many properties in Blender are adjustable for multiple selected objects by holding down the ALT modifier key. But not all. Options in menus such as Object --> Shade Smooth work on all selected objects without the need for the ALT modifier key. Applying sub-d to all selected objects works if the CTRL-[12345] key combo is used. But in other cases, like materials or modifier stacks, the data must be linked via the active object method (CTRL-L). (A minimal scripted sketch of these approaches appears after this list.)

This is one of the last legacy design limitations in how Blender deals with data internally. The developers are aware of it, and the ALT option is considered a workaround at best, or a patchy implementation at worst. But due to Blender's core workings, this seemingly simple thing is not that simple to implement elegantly - it would require at least a partial core rewrite. So, not ideal or elegant, but workable at the moment. The Link/Transfer Data model (CTRL-L) does have its advantages as well, although in this particular case (changing properties for multiple selected objects) it's a bit... awkward here and there.

That is simple to solve: hover over one of the gizmo tools in the toolbar on the left, and either assign it to your Quick Favorites menu and/or add a shortcut to the tool in question. It is possible to replace the standard G, S, and R behaviour by assigning the G, S, and R keys to the respective three transformation tools. If you do, Blender will pretty much behave like other DCCs out there.
    1 point
  6. @muhchris The snap does work fine as long as you move your axis to the correct location, but there are some alternative approaches that may be helpful depending on your goal. When you want to clone something between two given positions, an alternative is to make a starting object and an end object with axes that can be placed in specific locations, and then clone between them in Blend mode (a small positional sketch of this idea appears after this list). If you know you need stairs with exactly an 8" rise and a 10" run, it's better to create them the way you have them; but if you're doing R&D and just trying to get a sense of how many stairs "look right", a blend might be helpful.
    1 point
  7. This is brilliant. Thanks for the clear explanation.
    1 point
  8. Deeply impressive results. I’ve been using AI for limited tasks - mostly generating portraits and occasional ideation - for a couple of years now. Some of the results have been good enough to use in my books, but just barely. And not very reliably. DALL-E 2 looks like a significant step forward in quality. I just joined the waiting list, so I might be able to leave a report here eventually.
    1 point
  9. The file opens fine. Will try to figure out how you did this. Thank you!
    1 point
  10. About the series:

Genre: Sci-Fi, Drama

Synopsis: A complex saga of humans scattered on planets throughout the galaxy, all living under the rule of the Galactic Empire. Far in the future, the Empire is about to face a reckoning unlike anything it has faced before: several millennia of chaos have been predicted by the galaxy's leading psychohistorian, Hari Seldon. But can the Empire offset the disaster before it begins?

Top Cast:
     • Lou Llobell as Gaal Dornick
     • Jared Harris as Hari Seldon
     • Lee Pace as Brother Day
     • Leah Harvey as Salvor Hardin
     • Sasha Behar as Mari
     • Laura Birn as Demerzel
     • Terrence Mann as Brother Dusk

Trivia: In the original Foundation Trilogy's universe, and also in the immediate sequel Foundation's Edge, robots were unknown. In three subsequent novels, Asimov retconned this and merged his Robot and Foundation universes. This in turn allowed robots to appear in two Foundation prequels written by Asimov. The presence of a robotic character in the series tells us that this adaptation is not simply based on the original trilogy, but on the unified universe created by Asimov.

VFX: An additional 19 vendors had to be brought in to support DNEG, Important Looking Pirates and Outpost VFX as the visual effects shots for the ten episodes of Season 1 went from 1,500 to 3,900. "I realised early on that we needed a second visual effects supervisor because there was such an overlap between physical production and post-production, and the different time zones," explains Kathy Chasen-Hay, senior VP VFX, Skydance.

Each set was designed in three dimensions under the direction of production designer Rory Cheyne. "We then had to reverse engineer those models to turn them into something which was buildable by guys in a workshop," states Conor Dennison, who was a supervising art director along with William Cheng and Adorjan Portik. "We set up Unreal Engine and a whole VR department within the art department. We would literally give David Goyer, the directors, producers and executives from Apple a set of VR goggles and say, 'Off you go. Walk around the set. See what you think.' It was fully lit and textured." Rhinoceros 3D became the design software of choice. "The learning curve was stiffer but what you could do with the models afterwards was phenomenal."

Art: The intro of each episode is very impressive, featuring elements and concepts of the show in the form of a futuristic painting medium that resembles a glittering ferrofluid. Displaying the history of the Empire in the Imperial Palace is the Mural of Souls, which is made of moving colour pigments. "We tried an interesting approach which was putting a bunch of acrylic ink in a pan, using ferrofluid and running a magnet underneath it; that was filmed at a high speed which gave us a cool look, but it would have been impractical to have the mural wet all of the time," notes MacLean. "Then we came up with the samsara where the Tibetan monks make mandalas out of sand and wipe them away. We said, 'What if we take that and turn it up to 11? Take the magnets from the ferrofluid and have the sand be magnetic. The magnetic sand stays on the wall, twirls and makes these crazy images.'" Simulations were placed on top of the physical mural created by the art department. "There was depth given to the various key features on the mural so depending on what was actually there, there was a custom particle layout, motion paths and noise fields," explains Enriquez. "It was a lot of back-and-forth testing, and once we got it to work, the effect went throughout the entire shot. Kudos to Mackevision for getting that going."

Scar of Trantor: In the first episode there is a terrorist attack on the planet's largest space hub, which connects to the planet through a huge elevator pillar like an umbilical cord. The destruction of the space hub results in a huge collapse of the pillar, resulting in the deaths of millions. In order to have more control over the devastation caused by the suicide bombers, Rodeo FX was recruited to produce a CG version. "Trantor is basically a city like Shanghai or Hong Kong in its architectural density, but that spreads over the full planet surface, then layered in depth of about 200 city levels of approximately the same urban density," explains Arnaud Brisebois, VFX supervisor, Rodeo FX. "The task was to basically tear a giant gash in it. We would get a collection of building builds from DNEG. For each of those buildings, we generated five destruction variants, from barely wounded to completely collapsed. We generated shading attributes through FX's destroyed areas which would be picked up in look development to give various areas of each building's destruction its own age, or effect from destruction either partial or full. We built a megafloors [city levels] collection the same way. We spent a lot of time and effort working on single assets, making them highly detailed. You could drop a camera anywhere, near or far, and it would still look awesome. I remember David Goyer's comments coming in after seeing our early assemblies of the shots. 'That's AWESOME! Like a spaceship Hindenburg!' It's the type of tone we'd get from David. Always excited, positive and fun to read."

The Conjecture: The device known as the Prime Radiant displays the lifework of revolutionary mathematician Hari Seldon (Jared Harris). "What I kept saying to Chris and the team was, 'We know that Hari Seldon and Gaal Dornick [Lou Llobell] are the only people that can understand this math, but we're so far into the future I don't want to see Arabic numbers,'" remarks Goyer. "I also want it to be beautiful and spiritual. When Gaal and Hari look at the math it's almost like they are communicating with angels or God." The solution was found at the Toronto-based design house Tendril. "We went to Chris Bahry who in his spare time does quantum math," states Chris MacLean, production VFX supervisor. "He came up with something that I hope becomes the ultimate sci-fi MacGuffin." Holograms take the form of 'sandograms'. "The majority of our holograms are meant to be solid particles that coalesce into whatever the hologram was," notes MacLean. "It worked extremely well with static objects and a 2.5D approach that Mike developed with DNEG." Michael Enriquez, production VFX supervisor, adds, "I liked the concept of it not emitting light and being a physical object in the scene. The technology is so advanced that you don't need an explanation for why it works."

The Monolith: Prominent on the surface of Terminus is a mysterious artifact that has a presence similar to that of the black monolith in 2001: A Space Odyssey. "Rory and I were like, 'It has to be like Arrival,'" states MacLean. "It has to be this simple thing. I started to get into divine geometry and drew a simple diamond with a bit of a facet on it and mirrored it. I sent it to DNEG and they sent it back to us as a concept with this glowing centre. Rupert Sanders, the pilot director, got his designer Ash Thorp to do the same kind of thing. We sent it to Matt Cherniss at Apple to decide and he liked the diamond shape." Even though the final design was decided in post-production, a partial practical element was constructed for principal photography. "We built the bottom six feet, which was a little wedge shape with a light feature coming out of it; that was hanging off of a crane up in Iceland on the top of a mountain," reveals Dennison. "Visual effects needed to have the effect of light for when Salvor Hardin [Leah Harvey] got up close to the Vault. The light naturally lit her up. Everything else was visual effects afterwards."

What's next? If everything goes to plan for Goyer, Foundation will span 80 episodes over eight seasons. In order to do this, a number of narrative gaps had to be conceptualised. "The 'Sack of Trantor' is mentioned in the novels. But for a television show, audiences also want a certain amount of spectacle. We had to find a balance between visualising some of these events and staying rooted in story and character."

Source: 3D World, Issue 279

Outpost VFX Breakdown

Personal notes: The series is kinda slow but interesting. The first episode is the one with the heaviest VFX scenes. Foundation reminds me of Dune: both of them have a lack of A.I. androids, many episodes involve events unfolding on Terminus, which is an arid planet like Dune, and both plots involve characters of imperial heritage. Alien life is not too prevalent. We get the chance to witness some alien lifeforms, but only very briefly.
    1 point
  11. Thank you all for replying so quickly. It shows how many different ways the same thing can be achieved, and how little of C4D I know. You guys are the best!
    1 point
  12. I would just model that curve directly into the plane, and then use posemorph to push it up from flat. There is a good case for using SDS edge weighting here also. If that is your plan, topology will look something like this... Here I SDS weighted the centreline to 100% and increased SDS level to L5, which is the maximum you would need to get razor sharpness on that edge. CBR
    1 point
  13. That's a new level of XPressoing right there...
    1 point
  14. Oh, there are XP people here who will be all over this 🙂 Not me, but I shall enjoy watching from the sidelines! CBR
    1 point
  15. Did it!

     1. Choose the Weight tool.
     2. Go to polygon mode.
     3. Choose the Spine1 joint. The viewport will look like this.
     4. In the Weight tool attributes, change the following.
     5. Now paint all over the bright orange area until it's completely black (the tool paints points, so aim for those and not the polygons).

Done! That bone should no longer have any effect on the character. It should be used as a helper to move it (I don't know why the artist made two joints instead of one for that).
    1 point
  16. This is awesome! I definitely want to learn Xpresso, it's super powerful. Have you got any tutorials/courses you'd recommend for learning Xpresso?
    1 point
  17. Another vote for an Unreal section. My guess is that many people around here might be looking to use Unreal in a similar fashion to C4D/Blender/Maya as opposed to game development - but a huge majority of the online resources/forums I can find assume you are a game dev. It would be nice to have a section friendly to our needs/goals. In fact I have a very basic UE5 texture scaling question I can't get an answer to anywhere (or at least the answers I have found don't make sense coming from C4D).
    1 point
  18. I would also add to this good list of notes that the reflections in the metal have nothing to do with the roof surround. The tone of the roof tiles would tint and alter the look of the lower metal sides. Also check your perspective. Have you looked into shadow catchers or camera calibration at all? There are techniques for placing 3D objects into 2D images.
    1 point
  19. I would be extremely interested in an Unreal section. I have been playing with Unreal for the last couple of months and I'm very excited about its real-time render capabilities, but there is definitely a learning curve for C4D users. Changing the keyboard shortcuts and the middle mouse button function to emulate C4D's behavior was very helpful and I'm sure we can all share tips to help each other learn this beast of a program 🙂
    1 point
  20. This was a very professional presentation. I think the community-centric setup, the huge free offerings, and the urge to use all the complex tech to give the user an easy-to-use interface for easily achieving the most common tasks... I really think that is the way to go. I think the old-school 3D programs also need tools that just take care of the most needed tasks. I mean, how many skies, cities or streets need to be set up all the time, and how complicated it is to get, for example, a sky with animated clouds. It is like a musician who always has to build his instrument before he can make music.
    1 point
  21. In the meantime, the quickest way out of this is to switch from the 1up view to the 4up view and then back again. This can typically be done by middle-clicking in the viewport. Cheers. Edit: I will also add that you shouldn't worry about your file being corrupted.
    1 point
  22. I'm afraid that is a known issue with M1 chips and that OS. Maxon are working to fix it as soon as possible, and it should be addressed in a later patch. As I understand it, your file is not at risk of becoming corrupt due to this. CBR
    1 point
  23. I like Chester's way better than the constraints option, but having applied the constraints once, you don't need them after that, so you could just CStO the whole setup once done, and turn them all off in the process - no more scene / frame slow-down. CBR
    1 point
  24. Oh I know. That's why I built it this way. You can move the Planes anywhere you want because they're still live. You can trim the edges in seconds with a MoGraph Selection. This is the last one. https://www.dropbox.com/s/lvduxm0iid69vv3/Cloned_Terrain_Edges.c4d.zip?dl=0
    1 point
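A toy sketch of the noise-to-image loop described in post 2 above. This is not OpenAI's code; it is a generic, assumption-heavy illustration of the idea that generation starts from random pixels and is repeatedly denoised toward a guidance signal, with a caller-supplied stand-in function playing the role CLIP plays in DALL-E 2.

```python
import numpy as np

def generate(guidance, shape=(64, 64, 3), steps=50, seed=0):
    """Start from pure noise and repeatedly nudge it toward the guidance signal."""
    rng = np.random.default_rng(seed)
    image = rng.standard_normal(shape)        # no source image: random pixels only
    for t in range(steps):
        noise_level = 1.0 - t / steps         # remaining randomness shrinks each step
        # guidance(image) stands in for a CLIP-style "does this match the prompt?"
        # signal; here it is just a toy function supplied by the caller.
        image += 0.1 * guidance(image) + 0.01 * noise_level * rng.standard_normal(shape)
    return image

# Toy guidance: pull pixel values toward 0.5 (a real system uses CLIP-derived scores).
result = generate(lambda img: 0.5 - img)
print(result.shape, round(float(result.mean()), 3))
```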
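For post 5, a minimal bpy sketch, assuming a recent Blender release and run with the relevant objects selected and one of them active. It shows the scripted analogues of the three paths the post mentions: per-property ALT-style changes, selection-wide menu operators, and CTRL-L data linking.

```python
import bpy

# Set the same property on every selected object - the scripted analogue of
# holding ALT while changing the value in the UI.
for obj in bpy.context.selected_objects:
    obj.show_wire = True

# Menu operators such as Shade Smooth already act on the whole selection.
bpy.ops.object.shade_smooth()

# Copy the modifier stack from the active object to the other selected objects,
# the same route CTRL-L (Link/Transfer Data) takes in the viewport.
bpy.ops.object.make_links_data(type='MODIFIERS')
```

The operators act on the current selection and active object, so make the source object active before running the last call.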
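And for post 6, a small plain-Python sketch (not the C4D API) of the arithmetic behind cloning in Blend mode: positions are interpolated between a start axis and an end axis, which is why dropping in a start and an end object and blending between them is a quick way to eyeball how many steps "look right". The rise/run numbers below are just the example values from the post.

```python
def blend_positions(start, end, count):
    """Return `count` positions evenly interpolated from `start` to `end`."""
    positions = []
    for i in range(count):
        t = i / (count - 1) if count > 1 else 0.0
        positions.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return positions

# Example: 9 stair treads with an 8" rise and a 10" run per step (x = run, y = rise).
steps = 9
top = (10.0 * (steps - 1), 8.0 * (steps - 1), 0.0)
for pos in blend_positions((0.0, 0.0, 0.0), top, steps):
    print(pos)
```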