The 9th Workshop on Knowledge, Art, and Cultural Informatics will be held as described below. We look forward to your participation.
Updated January 28: Presentation 6 has been added, and registration, the opening remarks, and Presentations 1 through 5 have each been moved 30 minutes earlier.
Program:
13:00  Registration opens
13:30  Opening remarks
13:35  Presentation 1: "Construction of a C-language learning support interface based on copy-typing (shakyo-style) learning" by 小高 真太郎 (Graduate School of Systems Engineering, Wakayama University)
14:00  Presentation 2: "Implementation and analysis of an information literacy test" by 野﨑 崇弘 (Graduate School of Systems Engineering, Wakayama University)
14:25  Break
14:40  Presentation 3: "Development of a shikokin synthesizer for the digital archiving of traditional Japanese music" by 福森 隆寛 (College of Information Science and Engineering, Ritsumeikan University)
15:05  Presentation 4: "Issues in archiving Internet art: insights from debates on participatory art" by 森 敬洋 (Graduate School of Core Ethics and Frontier Sciences, Ritsumeikan University)
15:30  Break
15:45  Presentation 5: "Progress in the digitization and opening-up of regional materials such as municipal histories: the case of Yokohama City" by 長塚 隆 (Professor Emeritus, Tsurumi University)
16:10  Presentation 6: "A comparative report of deep features and human perception" by Wei Zhenao (College of Information Science and Engineering, Ritsumeikan University)
16:35  Closing remarks
17:00  Reception
Presentation 1. We built a Web-application interface to support beginners learning the C language. It presents typing exercises of one to a few lines, centered on iteration with for statements, and stores answer data in a database. The interface was used in an introductory programming course at the university, and the results were analyzed together with a post-test. We intend to keep improving it so that it strengthens the learning effect for beginners and also provides feedback to instructors.
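As a rough illustration of how such an interface might check a typed answer and record it, here is a minimal sketch assuming a Flask backend with SQLite; the route, table layout, and sample exercise are hypothetical and are not taken from the presented system.

# Hypothetical exercise-checking endpoint: compare the typed snippet with the
# expected for-loop and store the submission for later analysis.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)

# One sample exercise: the learner retypes a short for-loop snippet.
EXERCISES = {
    1: "for (i = 0; i < 10; i++) {\n    sum += i;\n}"
}

def init_db():
    with sqlite3.connect("answers.db") as con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS answers ("
            "user_id TEXT, exercise_id INTEGER, typed TEXT, correct INTEGER,"
            "submitted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"
        )

@app.route("/submit", methods=["POST"])
def submit():
    data = request.get_json()
    expected = EXERCISES[data["exercise_id"]]
    correct = data["typed"].strip() == expected.strip()
    # Save the answer so instructors can analyze it later.
    with sqlite3.connect("answers.db") as con:
        con.execute(
            "INSERT INTO answers (user_id, exercise_id, typed, correct) "
            "VALUES (?, ?, ?, ?)",
            (data["user_id"], data["exercise_id"], data["typed"], int(correct)),
        )
    return jsonify({"correct": correct})

if __name__ == "__main__":
    init_db()
    app.run()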
Presentation 2. For a first-year information-processing course in the Faculty of Systems Engineering at Wakayama University, students answered a revised problem set of an information-literacy comprehension test intended to prompt review of and reflection on the course content. Based on item response theory, we computed item discrimination, item difficulty, and examinee ability estimates, and attempted a comparison with the previous year through test equating. We also surveyed the situation outside the university, since students outside information-related programs likewise need a well-organized body of knowledge and a means of checking their understanding.
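For reference, the discrimination, difficulty, and ability values mentioned above correspond to the two-parameter logistic (2PL) model of item response theory. The sketch below shows the model and a maximum-likelihood ability estimate for one response pattern; the item parameters and responses are made up, and the calibration and equating steps from the abstract are not reproduced.

# 2PL item response model: P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))
# where a is item discrimination, b is item difficulty, theta is ability.
# All numbers below are invented purely for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 0.6])   # discrimination of 4 hypothetical items
b = np.array([-0.5, 0.0, 0.7, 1.2])  # difficulty of the same items
responses = np.array([1, 1, 0, 0])   # one student's right/wrong pattern

def p_correct(theta):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta):
    p = p_correct(theta)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Maximum-likelihood ability estimate for this response pattern.
result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded")
print(f"estimated ability (theta): {result.x:.2f}")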
Presentation 3. Digitally archiving Japan's intangible cultural properties makes it possible for anyone to experience them easily, regardless of place or time. This study addresses the digital archiving of traditional Japanese music, focusing on the shikokin (紙腔琴), one of the instruments that became popular in the Meiji era. Specifically, we recorded a large number of acoustic segments of the shikokin (performance tones, the sound of the crank turning, and so on) and developed a shikokin synthesizer that reproduces the instrument's sound digitally by combining these segments.
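The reproduction step described above is essentially concatenative synthesis: recorded segments are selected and spliced to render a phrase. The following is a minimal sketch under that assumption; the file names, note inventory, and mixing are hypothetical and far simpler than the actual synthesizer.

# Concatenative-synthesis sketch: splice pre-recorded segments into a phrase.
import numpy as np
import soundfile as sf

SR = 44100
# Hypothetical per-note recordings of the instrument (one WAV per pitch).
SEGMENTS = {"C4": "shikokin_C4.wav", "E4": "shikokin_E4.wav", "G4": "shikokin_G4.wav"}

def load_segment(path):
    data, sr = sf.read(path)
    if data.ndim > 1:            # mix down to mono for simplicity
        data = data.mean(axis=1)
    assert sr == SR
    return data

def render(notes, gap_sec=0.05):
    """Concatenate recorded segments, separated by short silences."""
    gap = np.zeros(int(gap_sec * SR))
    pieces = []
    for note in notes:
        pieces.append(load_segment(SEGMENTS[note]))
        pieces.append(gap)
    return np.concatenate(pieces)

if __name__ == "__main__":
    audio = render(["C4", "E4", "G4", "C4"])
    sf.write("phrase.wav", audio, SR)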
Presentation 4. Internet art, which grew out of media art, is being archived by Rhizome.org and by the Agency for Cultural Affairs, yet some works remain difficult to preserve. Chief among them are works whose subject is a production process in which an unspecified number of users take part, together with the communication that occurs in that process. This presentation examines such Internet art built on the "participation" of net users by drawing on the debate over "participatory art" in contemporary art criticism.
Presentation 5. In recent years, regional materials such as municipal histories (local and prefectural histories and the like) have increasingly been digitized, but it is difficult to grasp to what extent they have actually been digitized and made publicly available. In this presentation, the degree of progress was estimated using search portals such as NDL Search (the National Diet Library portal) and the catalog systems of the Kanagawa Prefectural Library and the Yokohama City Library. Issues in opening up digitized regional materials such as municipal histories were also examined.
Presentation 6. We discuss the differences between the similarity computed from deep features extracted from images and the similarity perceived by human beings. This study builds on our previous findings on the classification of deep features in images: the correlation between feature graphs of deep learning networks can describe image features effectively [1]. However, no study has shown that the features learned by deep networks are consistent with the features humans perceive. Moreover, previous studies have constrained the subjects' choices and therefore obtained only a locally optimal solution [2]. In short, subjects were asked to choose their favorite object from a fixed list, say A, B, and C; suppose B was chosen by most subjects; an object D not in the list might nevertheless have been the most popular had it been included. We therefore re-evaluate the image features through an unrestricted subjective experiment.
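For context, the machine-side similarity in such a comparison is typically a distance between feature vectors. The sketch below computes a cosine similarity between deep features of two images, assuming a pretrained ResNet-50 from torchvision as the extractor; the networks, the feature-graph correlation measure [1], and the experimental protocol of the presentation are not reproduced here.

# Deep-feature similarity between two images, the kind of machine-side score
# one would compare against human similarity judgments. ResNet-50 and cosine
# similarity are assumptions for illustration, not the presentation's method.
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.fc = torch.nn.Identity()    # keep the 2048-d penultimate features
model.eval()
preprocess = weights.transforms()

def deep_feature(path):
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        return model(preprocess(img).unsqueeze(0)).squeeze(0)

def feature_similarity(path_a, path_b):
    return F.cosine_similarity(deep_feature(path_a), deep_feature(path_b), dim=0).item()

# Compare against a human rating collected in a subjective experiment
# (file names and the rating are hypothetical).
machine_score = feature_similarity("image_a.jpg", "image_b.jpg")
human_score = 0.8   # e.g. mean rating on a normalized 0-1 scale
print(f"deep-feature similarity: {machine_score:.3f}, human rating: {human_score:.3f}")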