Slattery Shepherd posted an update 4 days, 10 hours ago
Finally, we use low-level edge information to guide saliency map generation, and the edge-guided fusion identifies salient regions effectively. The proposed HFFNet has been extensively evaluated on five traditional benchmark datasets. The experimental results demonstrate that the proposed model is effective for salient object detection, performing competitively against 10 state-of-the-art models under different evaluation metrics and outperforming most of the comparison models.

This pictorial presents the development of a data sculpture, followed by our reflections inspired by Research through Design (RtD) and Dahlstedt’s process-based model of artistic creativity. We use the notion of negotiation between concept and material representation to reflect on the ideation, design process, production, and exhibition of “Slave Voyages”, a set of data sculptures that depicts the slave trade from Africa to the Americas. The work was initially produced as an assignment on physicalization for a Design course at the Federal University of Rio de Janeiro. Our aim is to open a discussion on material representation and negotiation in the creative process of data physicalization.

Physical engagement with data necessarily influences the reflective process. However, the roles of interactivity and narration are often overlooked when designing and analyzing personal data physicalizations. We introduce Narrative Physicalizations, everyday objects modified to support nuanced self-reflection through embodied engagement with personal data. Narrative physicalizations borrow storytelling with graphs from narrative visualizations, and engagement with mundane artifacts from data-objects. Our research uses a participatory approach to research-through-design and includes two interdependent studies. In the first, personalized data physicalizations are developed for three individuals. In the second, we conduct a parallel autobiographical exploration of what constitutes personal data when using a Fitbit. Our work expands the landscape of data physicalization by introducing narrative physicalizations. It suggests an experience-centric view of data physicalization in which people engage physically with their data in playful ways, making their body an active agent in the reflective process.

This paper presents a simple yet effective algorithm for automatically transferring face colors in portrait videos. We extract the facial features and vectorize the faces in the input video using Poisson vector graphics, which encode the low-frequency colors as the boundary colors of diffusion curves. Then we transfer the face color of a reference image/video to the first frame of the input video by applying optimal mass transport between the boundary colors of the diffusion curves. Next, the boundary colors of the first frame are transferred to the subsequent frames by matching the curves. Finally, we render the video using an efficient random-access Poisson solver. Thanks to our efficient diffusion curve matching algorithm, transferring colors for the vectorized video takes less than 1 millisecond per frame. Our method is particularly well suited to frequent transfers from multiple references owing to its information-reuse nature. Since our method does not require correspondence between the reference image/video and the input video, it is flexible and robust in handling faces with significantly different geometries and postures, which often pose challenges for existing methods. We demonstrate the efficacy of our method on image-to-video transfer and color swaps in video.
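The color-transfer step above hinges on optimal mass transport between two sets of boundary colors. As a rough illustration only (not the authors' formulation), per-channel 1D optimal transport between two color sets reduces to matching sorted samples. The sketch below uses hypothetical NumPy arrays standing in for sampled diffusion-curve boundary colors.

```python
import numpy as np

def transfer_colors_1d_ot(src_colors, ref_colors):
    """Map source boundary colors onto reference colors via per-channel 1D OT.

    For 1D empirical distributions with squared-Euclidean cost, the optimal
    transport plan is the monotone rearrangement: sort both sides and match by
    rank. This is a simplification of full (joint) color transport.

    src_colors: (N, 3) RGB colors in [0, 1]; ref_colors: (M, 3).
    Returns an (N, 3) array of transferred colors.
    """
    src = np.asarray(src_colors, dtype=float)
    ref = np.asarray(ref_colors, dtype=float)
    out = np.empty_like(src)
    n, m = src.shape[0], ref.shape[0]
    q = (np.arange(n) + 0.5) / n                      # source quantile positions
    ref_q = (np.arange(m) + 0.5) / m                  # reference quantile positions
    for c in range(src.shape[1]):
        order = np.argsort(src[:, c])                 # rank the source channel
        ref_sorted = np.sort(ref[:, c])               # reference quantile function
        mapped = np.interp(q, ref_q, ref_sorted)      # sample it at source ranks
        out[order, c] = mapped
    return out

# Hypothetical usage: push the input frame's boundary colors toward a reference palette.
src = np.random.rand(200, 3)   # stand-in for boundary colors of the input frame
ref = np.random.rand(150, 3)   # stand-in for boundary colors of the reference face
new_colors = transfer_colors_1d_ot(src, ref)
```

Treating each channel independently keeps the mapping monotone and fast, which is why sorting-based OT is a common shortcut; the paper's joint formulation over diffusion-curve boundary colors may behave differently.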
Graph convolutional networks (GCNs) have well-documented performance in various graph learning tasks, but their analysis is still in its infancy. Graph scattering transforms (GSTs) offer training-free deep GCN models and are amenable to generalization and stability analyses. The price paid by GSTs is a complexity that grows exponentially with the number of layers, which discourages their deployment when a deep architecture is needed. The present work addresses this limitation by introducing an efficient pruned GST (pGST) approach. The resulting pruning algorithm is guided by a graph-spectrum-inspired criterion and retains informative scattering features on the fly while bypassing the exponential complexity of GSTs. Stability of the novel pGST is established when the input graph data or the network structure are perturbed. Furthermore, the sensitivity of pGST to random and localized signal perturbations is investigated analytically and experimentally. Numerical tests show that pGST performs comparably to the baseline GST at considerable computational savings, and that it achieves performance comparable to state-of-the-art GCNs in graph and 3D point cloud classification tasks. Analysis of the pGST pruning patterns indicates that graph data in different domains call for different network architectures, and the pruning algorithm may guide design choices for contemporary GCNs.

Computational surface modeling for material recognition has transitioned from reflectance modeling based on controlled in-lab radiometric measurements to image-based representations built on internet-mined single-view images captured in the scene. We take a middle-ground approach to material recognition that takes advantage of both rich radiometric cues and flexible image capture. A key concept is differential angular imaging, where small angular variations in image capture enable angular-gradient features for an enhanced appearance representation that improves recognition. We build a large-scale material database, the Ground Terrain in Outdoor Scenes (GTOS) database, to support ground terrain recognition for applications such as autonomous driving and robot navigation. The database consists of over 30,000 images covering 40 classes of outdoor ground terrain under varying weather and lighting conditions. We develop a novel approach for material recognition, the texture-encoded angular network (TEAN), which combines deep encoding pooling of RGB information with differential angular images for angular-gradient features to fully leverage this large dataset. With this network architecture, we extract characteristics of materials encoded in the angular and spatial gradients of their appearance. Our results show that TEAN achieves recognition performance that surpasses both single-view performance and standard (non-differential, large-angle sampling) multi-view performance.
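For the pruned graph scattering transform described in the GCN abstract above, a minimal NumPy sketch can illustrate the idea: build a small diffusion wavelet bank, expand the scattering tree, and drop low-energy branches on the fly so the tree no longer grows exponentially with depth. The lazy-diffusion wavelets, the energy threshold `tau`, and all function names are assumptions for illustration, not the paper's spectrum-inspired criterion.

```python
import numpy as np

def diffusion_wavelets(A, J=3):
    """Build a simple dyadic diffusion wavelet bank from an adjacency matrix A.

    Psi_j = T^(2^j) - T^(2^(j+1)) with T the lazy diffusion operator; this is one
    common construction for graph scattering, not necessarily the paper's filter bank.
    """
    n = A.shape[0]
    d = A.sum(axis=1, keepdims=True)
    T = 0.5 * (np.eye(n) + A / np.maximum(d, 1e-12))          # lazy random walk
    powers = [np.linalg.matrix_power(T, 2 ** j) for j in range(J + 1)]
    low_pass = powers[J]
    wavelets = [powers[j] - powers[j + 1] for j in range(J)]
    return low_pass, wavelets

def pruned_scattering(x, A, J=3, L=3, tau=1e-3):
    """Expand |Psi_j ... |Psi_{j1} x|| up to depth L, pruning branches whose
    relative energy falls below tau (a stand-in for the spectrum-inspired criterion)."""
    low_pass, wavelets = diffusion_wavelets(A, J)
    x = np.asarray(x, dtype=float)
    total = np.linalg.norm(x) ** 2 + 1e-12
    frontier, features = [x], [low_pass @ x]
    for _ in range(L):
        next_frontier = []
        for u in frontier:
            for Psi in wavelets:
                v = np.abs(Psi @ u)                           # filter + modulus
                if np.linalg.norm(v) ** 2 / total < tau:
                    continue                                  # prune low-energy branch
                next_frontier.append(v)
                features.append(low_pass @ v)                 # keep this branch's feature
        frontier = next_frontier
    return np.concatenate(features)
```

Without the pruning test the tree would hold J^L nodes at depth L; the threshold keeps only branches that still carry a meaningful share of the signal's energy, which is the behavior the abstract describes.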
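For the differential angular imaging idea in the TEAN abstract, the following PyTorch sketch shows one way a two-stream network could fuse RGB texture features with features computed from a differential angular image (the difference between two views taken a few degrees apart). The class name, layer sizes, and fusion by concatenation are illustrative assumptions, not the actual TEAN architecture with deep encoding pooling.

```python
import torch
import torch.nn as nn

class TwoStreamMaterialNet(nn.Module):
    """Minimal two-stream sketch: one stream encodes RGB texture, the other
    encodes the differential angular image; features are concatenated and classified."""

    def __init__(self, num_classes=40):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_stream = stream()        # spatial texture cues
        self.angular_stream = stream()    # angular-gradient cues
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, view_a, view_b):
        diff = view_a - view_b            # differential angular image
        feats = torch.cat([self.rgb_stream(view_a), self.angular_stream(diff)], dim=1)
        return self.classifier(feats)

# Hypothetical usage: two views of the same ground patch captured at slightly different angles.
net = TwoStreamMaterialNet(num_classes=40)
a = torch.randn(2, 3, 224, 224)
b = torch.randn(2, 3, 224, 224)
logits = net(a, b)   # shape (2, 40)
```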