CORNIA, MARCELLA
 Geographic distribution
Continent #
NA - North America 9.513
AS - Asia 9.313
EU - Europe 8.059
SA - South America 886
AF - Africa 179
OC - Oceania 51
Unknown continent - Continent info not available 16
Total 28.017
Country #
US - United States of America 9.211
IT - Italy 3.873
SG - Singapore 2.637
CN - China 2.294
HK - Hong Kong 1.054
GB - United Kingdom 977
TR - Turkey 802
VN - Vietnam 792
DE - Germany 685
BR - Brazil 656
FR - France 456
SE - Sweden 431
KR - Korea 419
JP - Japan 284
FI - Finland 265
RU - Russian Federation 252
NL - Netherlands 231
IN - India 192
CA - Canada 171
ID - Indonesia 162
IE - Ireland 135
ES - Spain 124
TW - Taiwan 119
BD - Bangladesh 113
UA - Ukraine 99
MX - Mexico 95
AR - Argentina 82
AT - Austria 82
IQ - Iraq 61
BE - Belgium 59
CH - Switzerland 58
PL - Poland 57
BG - Bulgaria 50
ZA - South Africa 50
AU - Australia 43
MY - Malaysia 43
PK - Pakistan 38
AE - United Arab Emirates 37
LT - Lithuania 37
SA - Saudi Arabia 37
RO - Romania 34
PT - Portugal 33
EC - Ecuador 31
DK - Denmark 30
IL - Israel 29
PE - Peru 25
EG - Egypt 24
KE - Kenya 24
CL - Chile 23
VE - Venezuela 23
CO - Colombia 22
MA - Morocco 20
JO - Jordan 19
UZ - Uzbekistan 19
PH - Philippines 18
GR - Greece 17
DZ - Algeria 16
TH - Thailand 16
EU - Europe 15
NP - Nepal 15
TN - Tunisia 15
KZ - Kazakhstan 14
CZ - Czech Republic 13
PY - Paraguay 12
IR - Iran 11
OM - Oman 11
AZ - Azerbaijan 10
LU - Luxembourg 10
BZ - Belize 8
ET - Ethiopia 8
MO - Macao, Special Administrative Region of China 8
RS - Serbia 8
AM - Armenia 7
KH - Cambodia 7
NZ - New Zealand 7
SC - Seychelles 7
UY - Uruguay 7
SK - Slovakia (Slovak Republic) 6
AL - Albania 5
BA - Bosnia and Herzegovina 5
DO - Dominican Republic 5
HR - Croatia 5
LB - Lebanon 5
SY - Syrian Arab Republic 5
BH - Bahrain 4
BO - Bolivia 4
GE - Georgia 4
GT - Guatemala 4
JM - Jamaica 4
KG - Kyrgyzstan 4
KW - Kuwait 4
LK - Sri Lanka 4
LV - Latvia 4
MD - Moldova 4
MT - Malta 4
PS - Palestinian Territory 4
BB - Barbados 3
BY - Belarus 3
CR - Costa Rica 3
HU - Hungary 3
Total 27.975
City #
Singapore 1.648
Santa Clara 992
Ashburn 928
Hong Kong 859
Elâzığ 718
Hefei 715
Fairfield 661
Modena 610
San Jose 510
Chandler 505
Southend 436
Beijing 349
Seattle 316
Houston 295
Seoul 283
Woodbridge 280
Milan 264
Bologna 261
Ho Chi Minh City 253
London 246
Los Angeles 245
Cambridge 237
Wilmington 233
Nyköping 227
Ann Arbor 214
Helsinki 174
Hanoi 171
Buffalo 159
Rome 141
Chicago 132
Tokyo 131
New York 125
Jakarta 121
Dearborn 116
Boardman 115
The Dalles 114
Reggio Emilia 113
Dublin 107
Jacksonville 105
Lauterbourg 96
Parma 96
Council Bluffs 92
San Diego 88
Shanghai 84
Munich 82
Florence 76
São Paulo 76
Amsterdam 69
Nuremberg 69
Frankfurt am Main 67
Princeton 57
Taipei 57
Turin 57
Orem 54
Redwood City 54
Bomporto 52
Montreal 51
Mexico City 48
Sofia 48
Bremen 46
Moscow 46
Salt Lake City 46
Dallas 45
Kent 45
Pisa 43
Paris 42
Da Nang 41
Falkenstein 41
Phoenix 39
Brussels 38
Chennai 37
Naples 37
Warsaw 37
Palermo 36
Vienna 36
Dong Ket 35
Toronto 35
Izmir 33
Eugene 31
Guangzhou 30
Haiphong 29
Düsseldorf 28
Formigine 28
Johannesburg 28
Lappeenranta 28
Zurich 27
Dhaka 26
Falls Church 26
Manchester 26
Seo-gu 26
Copenhagen 25
Hangzhou 25
Ottawa 25
Bari 24
Biên Hòa 23
Central 23
Fremont 23
Nairobi 23
Tampa 23
Baghdad 22
Total 16.809
Name #
What was Monet seeing while painting? Translating artworks to photo-realistic images 643
Attentive Models in Vision: Computing Saliency Maps in the Deep Learning Era 563
MissRAG: Addressing the Missing Modality Challenge in Multimodal Large Language Models 562
Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach 546
Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models 504
Attentive Models in Vision: Computing Saliency Maps in the Deep Learning Era 496
Towards Cycle-Consistent Models for Text and Image Retrieval 491
Artpedia: A New Visual-Semantic Dataset with Visual and Contextual Sentences in the Artistic Domain 473
Modeling Multimodal Cues in a Deep Learning-based Framework for Emotion Recognition in the Wild 463
Automatic Image Cropping and Selection using Saliency: an Application to Historical Manuscripts 442
Imparare a descrivere gli oggetti salienti presenti nelle immagini tramite la visione e il linguaggio 417
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions 416
Aligning Text and Document Illustrations: towards Visually Explainable Digital Humanities 402
Learning to Read L'Infinito: Handwritten Text Recognition with Synthetic Training Data 402
M-VAD Names: a Dataset for Video Captioning with Naming 399
Dress Code: High-Resolution Multi-Category Virtual Try-On 394
Explaining Digital Humanities by Aligning Images and Textual Descriptions 393
A Deep Multi-Level Network for Saliency Prediction 391
Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model 384
FashionSearch++: Improving Consumer-to-Shop Clothes Retrieval with Hard Negatives 381
Recognizing social relationships from an egocentric vision perspective 377
Unveiling the Impact of Image Transformations on Deepfake Detection: An Experimental Analysis 358
Image-to-Image Translation to Unfold the Reality of Artworks: an Empirical Analysis 357
Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation 357
Dual-Branch Collaborative Transformer for Virtual Try-On 351
Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation 350
Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs 340
The Revolution of Multimodal Large Language Models: A Survey 339
SAM: Pushing the Limits of Saliency Prediction Models 339
Benchmarking BERT-based Models for Latin: A Case Study on Biblical References in Ancient Christian Literature 334
Multi-Level Net: a Visual Saliency Prediction Model 331
Visual Saliency for Image Captioning in New Multimedia Services 330
Dress Code: High-Resolution Multi-Category Virtual Try-On 330
Transform, Warp, and Dress: A New Transformation-Guided Model for Virtual Try-On 325
SynthCap: Augmenting Transformers with Synthetic Data for Image Captioning 320
Explore and Explain: Self-supervised Navigation and Recounting 314
From Show to Tell: A Survey on Deep Learning-based Image Captioning 314
A Novel Attention-based Aggregation Function to Combine Vision and Language 312
Multimodal Attention Networks for Low-Level Vision-and-Language Navigation 302
Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention 300
Towards Video Captioning with Naming: a Novel Dataset and a Multi-Modal Approach 297
Meshed-Memory Transformer for Image Captioning 290
CaMEL: Mean Teacher Learning for Image Captioning 282
VITON-GT: An Image-based Virtual Try-On Model with Geometric Transformations 280
Embodied Agents for Efficient Exploration and Smart Scene Description 277
Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities 272
A Unified Cycle-Consistent Neural Model for Text and Image Retrieval 264
Retrieval-Augmented Transformer for Image Captioning 259
Boosting Modern and Historical Handwritten Text Recognition with Deformable Convolutions 256
Investigating Bidimensional Downsampling in Vision Transformer Models 256
Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval 249
Adapt to Scarcity: Few-Shot Deepfake Detection via Low-Rank Adaptation 245
Learning to Select: A Fully Attentive Approach for Novel Object Captioning 238
Focus on Impact: Indoor Exploration with Intrinsic Motivation 237
Fashion-RAG: Multimodal Fashion Image Editing via Retrieval-Augmented Generation 234
Semantically Conditioned Prompts for Visual Recognition under Missing Modality Scenarios 233
Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis 233
Embodied Navigation at the Art Gallery 231
The Unreasonable Effectiveness of CLIP features for Image Captioning: an Experimental Analysis 230
Modeling Human Gaze Behavior with Diffusion Models for Unified Scanpath Prediction 225
Are Learnable Prompts the Right Way of Prompting? Adapting Vision-and-Language Models with Memory Optimization 222
BRIDGE: Bridging Gaps in Image Captioning Evaluation with Stronger Visual Cues 220
OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data 219
Matching Faces and Attributes Between the Artistic and the Real Domain: the PersonArt Approach 217
ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval 216
With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning 210
The LAM Dataset: A Novel Benchmark for Line-Level Handwritten Text Recognition 210
Towards Explainable Navigation and Recounting 207
Working Memory Connections for LSTM 207
LaDI-VTON: Latent Diffusion Textual-Inversion Enhanced Virtual Try-On 204
Fashion-Oriented Image Captioning with External Knowledge Retrieval and Fully Attentive Gates 200
TPP-Gaze: Modelling Gaze Dynamics in Space and Time with Neural Temporal Point Processes 195
Personalizing Multimodal Large Language Models for Image Captioning: An Experimental Analysis 189
Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation 187
Spot the Difference: A Novel Task for Embodied Agents in Changing Environments 187
Unlearning Vision Transformers without Retaining Data via Low-Rank Decompositions 186
Verifier Matters: Enhancing Inference-Time Scaling for Video Diffusion Models 184
Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering 177
SMArT: Training Shallow Memory-aware Transformers for Robotic Explainability 177
Personalized Instance-based Navigation Toward User-Specific Objects in Realistic Environments 176
Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing 176
Video Surveillance and Privacy: A Solvable Paradox? 171
Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization 170
Fluent and Accurate Image Captioning with a Self-Trained Reward Model 165
Out of the Box: Embodied Navigation in the Real World 164
Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images 163
Explaining Transformer-based Image Captioning Models: An Empirical Analysis 163
Trends, Applications, and Challenges in Human Attention Modelling 158
Mitigating Hallucinations in Multimodal LLMs via Object-aware Preference Optimization 149
Computer Vision in Human Analysis: From Face and Body to Clothes 146
Towards Retrieval-Augmented Architectures for Image Captioning 145
Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets 138
Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images 137
Augmenting and Mixing Transformers with Synthetic Data for Image Captioning 130
Multi-Class Unlearning for Image Classification via Weight Filtering 129
Sketch2Stitch: GANs for Abstract Sketch-Based Dress Synthesis 122
Image Captioning Evaluation in the Age of Multimodal LLMs: Challenges and Future Perspectives 118
Learning to Mask and Permute Visual Tokens for Vision Transformer Pre-Training 116
What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models 111
Pixels of Faith: Exploiting Visual Saliency to Detect Religious Image Manipulation 108
Total 27.799
Category #
all - all 91.786
article - articles 0
book - books 0
conference - conference papers 0
curatela - edited volumes 0
other - other 0
patent - patents 0
selected - selected 0
volume - volumes 0
Total 91.786


Year Total Jul Aug Sep Oct Nov Dec Jan Feb Mar Apr May Jun
2020/2021 569 0 0 0 0 0 0 0 0 0 296 121 152
2021/2022 2.415 117 119 124 122 84 107 155 187 244 324 575 257
2022/2023 2.147 269 223 186 179 244 187 85 168 304 64 134 104
2023/2024 1.987 254 148 216 215 270 104 97 118 61 175 135 194
2024/2025 6.595 626 211 216 378 948 682 328 538 745 585 624 714
2025/2026 10.945 1.065 613 979 1.185 1.574 676 1.325 1.270 996 1.262 0 0
Total 28.505