Bug localization, the task of identifying the location of the cause of a bug, is an important aspect of software maintenance. Although method-level bug localization is helpful for developers, only a few such techniques exist, and there is no large-scale framework for their evaluation, so little is known about them. In this paper, we present FinerBench4BL, an evaluation framework that enables large-scale, method-level comparison of existing information retrieval-based bug localization techniques, and investigate the impact of the recommendation granularity on the techniques' accuracy, the additional information they consider, and their execution time. The dataset was constructed from method repositories in which the projects in Bench4BL were reconstructed at the method level by repository transformation. In addition, based on the method repositories, we tailored existing file-level bug localization technique implementations to the method level with small modifications and combined them with the generated dataset into a method-level evaluation framework. An investigation using FinerBench4BL revealed that method-level techniques reduce the debugging effort required, although their accuracy decreases. Moreover, because the change to the method level reduces the influence of the additional information considered by existing techniques, considering many kinds of additional information leads to improved accuracy.
@article{tsumita-ipsjj202404,
author = {積田 静夏 and 天嵜 聡介 and 林 晋平},
title = {情報検索を用いたBug Localization手法にモジュール粒度の違いが与える影響},
journal = {情報処理学会論文誌},
volume = 65,
number = 4,
pages = {792--807},
year = 2024,
month = {apr},
}
Technical debt refers to the adoption of suboptimal solutions in software development in order to prioritize development costs, and it is known as one of the factors that hinder software evolution. A novel aspect of technical debt is that not all such impediments are regarded as negative effects to be resolved; rather, a certain amount of debt is accepted and managed so that it does not become harmful. This paper provides an overview of technical debt and related techniques discussed so far, including this evolving view of impediments to software evolution, and explains how technical debt relates to existing techniques concerning design and code quality.
@article{tkobaya-jssst202408,
author = {小林 隆志 and 林 晋平 and 斎藤 忍},
title = {技術的負債: ソフトウェア進化阻害要因の現在のとらえ方},
journal = {コンピュータソフトウェア},
volume = 41,
number = 3,
pages = {2--15},
year = 2024,
month = {aug},
}
Use case descriptions describe features consisting of multiple concepts that follow a procedural flow. Because existing feature location techniques do not consider the relations between concepts in such features, it is difficult to identify the concepts in the source code with high accuracy. This paper presents a technique to locate the concepts in a feature described in a use case description consisting of multiple use case steps, using the dependencies between them. We regard each use case step as the description of a concept, apply an existing concept location technique to the descriptions of the concepts, and obtain lists of modules. In addition, three types of dependencies among use case steps, namely time, call, and data dependencies, are extracted based on their textual descriptions. Modules in the obtained lists that fail to match the dependencies between concepts are filtered out. Thus, we can obtain more precise lists of modules. We have applied our technique to use case descriptions in a benchmark. The results show that our technique outperformed a baseline setting without the filtering.
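The filtering step lends itself to a small illustration. Below is a minimal Python sketch of the idea under simplified assumptions: each step's candidate list comes from an existing concept location technique, only the call dependency is checked, and the module names and call graph are hypothetical.

```python
# Minimal sketch of dependency-based filtering between consecutive
# use case steps; kept intentionally simpler than the actual technique.
def filter_by_call_dependency(candidates_per_step, calls):
    """candidates_per_step: ranked module lists, one per use case step.
    calls: set of (caller, callee) pairs from a static call graph."""
    filtered = [list(candidates_per_step[0])]
    for prev, curr in zip(candidates_per_step, candidates_per_step[1:]):
        kept = [m for m in curr
                if any((p, m) in calls or (m, p) in calls for p in prev)]
        # design choice of this sketch: fall back to the unfiltered list
        # if every candidate would be removed
        filtered.append(kept or list(curr))
    return filtered

steps = [["Order.place", "Cart.add"], ["Stock.reserve", "Log.write"]]
calls = {("Order.place", "Stock.reserve")}
print(filter_by_call_dependency(steps, calls))
# [['Order.place', 'Cart.add'], ['Stock.reserve']]
```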
@article{hayashi-ieicet202405,
author = {Shinpei Hayashi and Teppei Kato and Motoshi Saeki},
title = {Locating Concepts on Use Case Steps in Source Code},
journal = {IEICE Transactions on Information and Systems},
volume = {107-D},
number = 5,
pages = {602--612},
year = 2024,
month = {may},
}
In the process of developing a requirements specification, a requirements analyst iteratively conducts question-and-answer (Q&A) sessions to incrementally complete the initial requirements obtained from stakeholders. However, iterated Q&A sessions often suffer from problems that lead to a final requirements specification of lower quality. This paper presents the usage of a graph database system to identify bad smells in Q&A processes, which are symptoms leading to a lower-quality product, and to control the versions of a requirements list through these activities. In this system, the records of the Q&A activities and the requirements lists are structured and stored in the graph database Neo4j. Cypher, a database manipulation language, was used to show that bad smells in the Q&A process can be retrieved and that any version of the requirements list evolving through the process can be visualized.
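As an illustration of the storage-and-query idea, the following Python sketch runs a Cypher query through the official neo4j driver. The graph schema (Question, Answer, and Requirement nodes with ABOUT/ANSWERS relationships) is hypothetical, as is the concrete smell queried here (a question that never received an answer).

```python
from neo4j import GraphDatabase

# Hypothetical schema: (:Question)-[:ABOUT]->(:Requirement) and
# (:Answer)-[:ANSWERS]->(:Question). An unanswered question is one
# example of a bad smell in the Q&A process.
FIND_UNANSWERED = """
MATCH (q:Question)-[:ABOUT]->(r:Requirement)
WHERE NOT (q)<-[:ANSWERS]-(:Answer)
RETURN r.id AS requirement, q.text AS question
"""

def unanswered_questions(uri: str, user: str, password: str):
    with GraphDatabase.driver(uri, auth=(user, password)) as driver:
        with driver.session() as session:
            return [record.data() for record in session.run(FIND_UNANSWERED)]

# Example: unanswered_questions("bolt://localhost:7687", "neo4j", "password")
```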
@inproceedings{imahori-quatic2024,
author = {Yui Imahori and Junzo Kato and Shinpei Hayashi and Atsushi Ohnishi and Motoshi Saeki},
title = {Supporting {Q\&A} Processes in Requirements Elicitation: Bad Smell Detection and Version Control},
booktitle = {Quality of Information and Communications Technology: Proceedings of the 17th International Conference on the Quality of Information and Communications Technology},
pages = {253--268},
year = 2024,
month = {sep},
}
When developers rename an identifier, other identifiers named with the same intent also need to be renamed, which is laborious. An existing rename recommendation technique recommends renamings as a chain of recommendations based on the relationships between identifiers, but if the change operation cannot be applied to some identifier, no further recommendations are made beyond it, so recommendations are missed. In this paper, we propose RENAS-p, a technique that assigns priorities to candidates and recommends renamings based on both the relationships and the similarity between identifiers. Because identifiers that are related to each other or that use similar vocabulary tend to be renamed together, we define a relationship-based priority and a similarity-based priority and recommend candidates for which these are high. An evaluation of RENAS-p showed that its F-measure improved by more than 0.1 over the existing technique. Moreover, RENAS-p, which uses both relationships and similarity, produced more accurate recommendations than considering either alone.
@article{doi-sigse202403,
author = {土居 直樹 and 林 晋平},
title = {推薦の優先度に基づく識別子名一括変更支援},
journal = {情報処理学会研究報告},
volume = {2024-SE-216},
number = 3,
pages = {1--8},
year = 2024,
month = {mar},
}
We have developed THEOREE, a requirements elicitation method using a requirements thesaurus. To improve the accuracy of elicitation, we propose a requirements thesaurus augmented with constraint concepts. We applied the proposed thesaurus to requirements elicitation for a library system and evaluated it in comparison with the conventional requirements thesaurus.
@article{jkato-sigse202407,
author = {加藤 潤三 and 林 晋平 and 海谷 治彦 and 大西 淳 and 佐伯 元司},
title = {制約概念を取り入れた要求獲得のためのシソーラス},
journal = {情報処理学会研究報告},
volume = {2024-SE-217},
number = 9,
pages = {1--8},
year = 2024,
month = {jul},
}
In code review, understanding changes is time-consuming, so support through appropriate line-level diffs is important. However, the diff generation methods in wide use today sometimes produce diffs that are incorrect with respect to the change locations the developer intended and the identification of change hunks. This paper focuses on interactive diff correction, one approach to solving this problem. This approach is useful in that accurate diffs can be obtained because users manually point out problems, and it thus has practical value. However, the effort it requires, another aspect of practicality, had not been investigated. In this paper, we formulate the interactive diff correction process as a search problem and reproduce the process. Using this formulation, we empirically investigated interactive diff correction by measuring the number of indications and the amount of correction each indication yields, obtaining insights into the required effort.
@article{tsukasa-sigss202407,
author = {八木 士 and 林 晋平},
title = {ソースコード差分の対話的修正における修正性能の実証的調査},
journal = {電子情報通信学会技術研究報告},
volume = 124,
number = 133,
pages = {37--42},
year = 2024,
month = {jul},
}
Refactoring is a means of improving source code quality. Developers improve source code quality using automated refactoring application, but most automated refactorings operate at the class or method granularity, so automated application cannot improve source code quality at the statement granularity. Move-statement refactoring, a refactoring operation that improves source code quality at the statement granularity, has been proposed, but its preconditions and procedure are ambiguous, preventing automated application. We therefore formalized the preconditions and procedure of move-statement refactoring across methods based on dependence analysis and automated them. An evaluation of the applicable range of the refactoring showed that, when statements are in appropriate positions, move-statement refactoring across methods is applicable to 34.2% to 76.4% of statements, depending on the operation. An evaluation of automated application also showed that the formalized move-statement refactoring can be automated.
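A minimal sketch of the kind of data-dependence precondition involved, with statements reduced to def/use sets; the paper's actual formalization covers statement moves across methods and is richer than this.

```python
# Statements are modeled only by the variables they define and use.
def dependent(s, t):
    """True if s and t have a def-use, use-def, or def-def conflict."""
    return bool((s["defs"] & t["uses"]) or (s["uses"] & t["defs"])
                or (s["defs"] & t["defs"]))

def can_swap(stmts, i):
    """Precondition for moving stmts[i] past stmts[i + 1]."""
    return not dependent(stmts[i], stmts[i + 1])

stmts = [
    {"defs": {"x"}, "uses": {"a"}},  # x = a
    {"defs": {"y"}, "uses": {"b"}},  # y = b
    {"defs": {"z"}, "uses": {"x"}},  # z = x
]
print(can_swap(stmts, 0))                 # True: the two statements are independent
print(not dependent(stmts[0], stmts[2]))  # False: "z = x" uses the x defined above
```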
@article{yasuhara-sigss202407,
author = {安原 航汰 and 林 晋平},
title = {依存関係解析に基づく文移動リファクタリングの自動化},
journal = {電子情報通信学会技術研究報告},
volume = 124,
number = 133,
pages = {31--36},
year = 2024,
month = {jul},
}
Bug localization, the task of identifying the location of the cause of a bug, is important in software maintenance. Although techniques that automatically perform bug localization at the method level are useful for developers, what information is effective for them has not been sufficiently studied. A major challenge in method-level bug localization is that accuracy drops significantly because each method has little textual information available for recommendation. In this paper, we propose and evaluate BLIFA, a method-level technique that uses information from files and neighboring methods for recommendation. Analyzing strategies for compensating for the information missing at the method level on the method-level evaluation framework FinerBench4BL, we observed accuracy improvements when the information of callee methods and files was considered. However, because there were also many projects whose accuracy decreased when file-level information was considered, a mechanism for selecting the information to consider per project or per bug is needed.
@article{tsumita-sigss202403,
author = {積田 静夏 and 天嵜 聡介 and 林 晋平},
title = {異粒度情報の統合に基づく細粒度Bug Localization},
journal = {電子情報通信学会技術研究報告},
volume = 123,
number = 414,
pages = {150--155},
year = 2024,
month = {mar},
}
Identifying the refactorings contained in source code changes with high reliability requires visual inspection, which takes considerable effort. This paper proposes an interactive environment for supporting refactoring identification. The proposed environment detects and visualizes token sequences common to the added and deleted code in a change and provides cross-references between their occurrences, thereby supporting the identification of refactorings that involve moving code fragments. It also provides a feature for creating intermediate states in which already-identified changes have been applied, decomposing the original change into more primitive changes and making each of them easier to understand. We integrated the proposed features into an environment for creating refactoring datasets and evaluated it with human subjects. As a result, although answering took longer, more accurate data were obtained, including cases where multiple changes were tangled, and the proposed features contributed to identification.
@article{ueno-sigss202403,
author = {上野 尊義 and 陳 磊 and 林 晋平},
title = {ソースコード変更に含まれるリファクタリングの識別環境の構築},
journal = {電子情報通信学会技術研究報告},
volume = 123,
number = 414,
pages = {73--78},
year = 2024,
month = {mar},
}
Collecting refactoring instances is important for empirical studies on refactoring. However, existing collection methods have limitations regarding target languages and refactoring types, as well as collection accuracy. This paper analyzes the characteristics of commits that follow the Conventional Commits convention (CC commits) to examine the feasibility of automatically collecting refactoring commits. Because CC commits can easily be identified from their messages, they enable language-independent, automatic collection of refactoring commits while mitigating the accuracy problems of conventional message-analysis-based approaches. The analysis showed that CC commits tend to increase year by year regardless of language. Moreover, CC commits classified as refactoring contained more refactoring instances than the other classifications, indicating their high reliability.
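Identifying CC commits is a matter of matching the message header, as the following sketch illustrates; the type list follows the commonly used Angular convention, and the helper names are illustrative.

```python
import re

# Classify commit messages that follow the Conventional Commits
# convention (<type>[optional scope][!]: <description>) and pick out
# refactoring commits.
CC_PATTERN = re.compile(
    r"^(?P<type>build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test)"
    r"(\([^)]*\))?!?: .+")

def is_refactoring_cc(message: str) -> bool:
    m = CC_PATTERN.match(message.splitlines()[0])
    return bool(m) and m.group("type") == "refactor"

print(is_refactoring_cc("refactor(parser): extract tokenizer"))  # True
print(is_refactoring_cc("fix: handle empty input"))              # False
```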
@article{osera-sigse202403,
author = {大瀬良 龍誠 and 林 晋平},
title = {リファクタリングに注目したConventional Commitsの調査},
journal = {情報処理学会研究報告},
volume = {2024-SE-216},
number = 1,
pages = {1--8},
year = 2024,
month = {mar},
}
To improve the efficiency of software maintenance, change prediction techniques, which predict the modules where changes will occur, have been proposed. Although existing techniques mainly perform class-level prediction, method-level prediction identifies change locations more directly. It allows developers to refactor fine-grained change locations in advance and to allocate human resources more precisely according to the amount of change. Although method-level change prediction techniques have been proposed, they have not been sufficiently compared with class-level prediction. In this paper, we investigated the usefulness of method-level change prediction by comparing existing class-level and method-level change prediction. The results showed no significant difference in predictive performance between the two; however, when both were evaluated against method-level ground truth, method-level prediction achieved higher accuracy and F1-score.
@article{sugimori-sigss202403,
author = {杉森 裕斗 and 林 晋平},
title = {異なる粒度におけるソフトウェア変更予測結果の比較},
journal = {電子情報通信学会技術研究報告},
volume = 123,
number = 414,
pages = {37--42},
year = 2024,
month = {mar},
}
Extracting frequent source code changes as change patterns and recommending them enables change suggestions based on real examples, which is expected to reduce developers' effort and improve source code quality. In this paper, we propose a technique for extracting and automatically applying change patterns based on abstract syntax trees; the technique can apply changes automatically and can easily be ported to multiple languages. Change patterns are generated by splitting the abstract syntax trees generated from changed source code into chunks and normalizing the tree vertices. By generating the automatically applied source code from the source code that a change pattern originated from, automatic application is achieved without relying on the features of individual parsers. In our evaluation, the proposed technique recommended change patterns that preserved syntactic correctness at a higher rate than an existing token-based change pattern extraction technique.
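To illustrate the normalization idea (not the paper's exact algorithm), the following sketch uses Python's ast module as a stand-in parser and replaces identifiers and literals with placeholders so that structurally identical changes share one pattern key.

```python
import ast

# Normalize identifier and literal nodes to placeholders so that
# structurally identical code fragments map to the same pattern key.
class Normalizer(ast.NodeTransformer):
    def visit_Name(self, node):
        return ast.Name(id="$V", ctx=node.ctx)

    def visit_Constant(self, node):
        return ast.Constant(value="$C", kind=None)

def pattern_key(code: str) -> str:
    return ast.dump(Normalizer().visit(ast.parse(code)))

# Two concrete edits that differ only in names/literals share one pattern.
print(pattern_key("x = y + 1") == pattern_key("a = b + 2"))  # True
```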
@article{higuchi-sigss202403,
author = {樋口 結子 and 陳 磊 and 林 晋平},
title = {抽象構文木に基づくソースコード変更パターンの抽出と自動適用},
journal = {電子情報通信学会技術研究報告},
volume = 123,
number = 414,
pages = {43--48},
year = 2024,
month = {mar},
}
Inconsistent changes to code clones cause software defects. Many techniques support consistent changes based on code clone detection or change extraction. However, no code clone change support tool is available across the diverse programming languages and development environments used today. This paper reports a prototype of ICCheck, a language-independent tool usable in diverse environments. By adopting an existing language-independent clone search technique and depending only on the Git interface, we prototyped a clone change support tool that works in diverse environments. We confirmed that the tool works independently of language on several open-source repositories. We also confirmed that, by supporting the Language Server Protocol, the tool can be integrated into multiple development environments with little effort.
@inproceedings{toki-ses2024,
author = {阿部 元輝 and 林 晋平},
title = {環境適応性の高いコードクローン変更支援ツールの試作},
booktitle = {ソフトウェアエンジニアリングシンポジウム2024予稿集},
pages = {109--116},
year = 2024,
month = {sep},
}
@misc{nagaki-ses2024,
author = {永木 郁也 and 林 晋平},
title = {多様なプログラミング言語に対するリファクタリングの検出に向けて},
howpublished = {ソフトウェアエンジニアリングシンポジウム2024},
year = 2024,
month = {sep},
}
@article{hayashi-ieicet202302,
author = {Shinpei Hayashi},
title = {FOREWORD: Special Section on Empirical Software Engineering},
journal = {IEICE Transactions on Information and Systems},
volume = {E106-D},
number = 2,
pages = 137,
year = 2023,
month = {feb},
}
@article{hayashi-jssst202305a,
author = {林 晋平 and 横山 哲郎 and 名倉 正剛 and 井垣 宏},
title = {日本ソフトウェア科学会第39回大会報告},
journal = {コンピュータソフトウェア},
volume = 40,
number = 2,
pages = {61--72},
year = 2023,
month = {may},
}
Developers often refactor source code to improve its quality during software development. A challenge in refactoring is to determine if it can be applied or not. To help with this decision-making process, we aim to search for past refactoring cases that are similar to the current refactoring scenario. We have designed and implemented a system called RefSearch that enables users to search for refactoring cases through a user-friendly query language. The system collects refactoring instances using two refactoring detectors and provides a web interface for querying and browsing the cases. We used four refactoring scenarios as test cases to evaluate the expressiveness of the query language and the search performance of the system. RefSearch is available at https://github.com/salab/refsearch.
@inproceedings{toki-icsme2023,
author = {Motoki Abe and Shinpei Hayashi},
title = {{RefSearch}: A Search Engine for Refactoring},
booktitle = {Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution},
pages = {547--552},
year = 2023,
month = {oct},
}
This study reports the results of applying the cross-lingual bug localization approach proposed by Xia et al. to industrial software projects. To realize cross-lingual bug localization, we applied machine translation to non-English descriptions in the source code and bug reports, unifying them into English-based texts, to which an existing English-based bug localization technique was applied. In addition, a prototype tool based on BugLocator was implemented and applied to two Japanese industrial projects, which resulted in a slightly different performance from that of Xia et al.
@inproceedings{hayashi-icsme2023,
author = {Shinpei Hayashi and Takashi Kobayashi and Tadahisa Kato},
title = {Evaluation of Cross-Lingual Bug Localization: Two Industrial Cases},
booktitle = {Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution},
pages = {495--499},
year = 2023,
month = {oct},
}
Bug localization is an important aspect of software maintenance because it can locate modules that need to be changed to fix a specific bug. Although method-level bug localization is helpful for developers, there are only a few tools and techniques for this task; moreover, there is no large-scale framework for their evaluation. In this paper, we present FinerBench4BL, an evaluation framework for method-level information retrieval-based bug localization techniques, and a comparative study using this framework. This framework was semi-automatically constructed from Bench4BL, a file-level bug localization evaluation framework, using a repository transformation approach. We converted the original file-level version repositories provided by Bench4BL into method-level repositories by repository transformation. Method-level data components such as oracle methods can also be derived automatically by applying the oracle generation approach via bug-commit linking in Bench4BL to the generated method repositories. Furthermore, we tailored existing file-level bug localization technique implementations at the method level. We created a framework for method-level evaluation by merging the generated dataset and implementations. The comparison results show that the method-level techniques decreased accuracy but improved debugging efficiency compared with the file-level techniques.
@inproceedings{tsumita-saner2023,
author = {Shizuka Tsumita and Shinpei Hayashi and Sousuke Amasaki},
title = {Large-Scale Evaluation of Method-Level Bug Localization with {FinerBench4BL}},
booktitle = {Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering},
pages = {815--824},
year = 2023,
month = {mar},
}
Some documents, such as use case descriptions, describe features consisting of multiple concepts that follow a procedural flow. Because existing feature location techniques do not consider the relations between concepts in such features, it is difficult to identify the concepts in the source code with high accuracy. This paper presents a technique to locate the concepts in a feature described in a structured document consisting of multiple procedural steps, such as a use case description, using the dependencies between the concepts. We apply an existing concept location technique to the descriptions of the concepts and obtain a list of modules. Modules failing to match the dependencies between concepts are filtered out. Then, we can obtain a more precise list of modules. The conducted experiment underscores the effectiveness of our technique.
@inproceedings{hayashi-quors2023,
author = {Shinpei Hayashi and Teppei Kato and Motoshi Saeki},
title = {Locating Procedural Steps in Source Code},
booktitle = {Proceedings of the 47th IEEE Computer Software and Applications Conference},
pages = {1607--1612},
year = 2023,
month = {jun},
}
@misc{sugimori-msrasiasummit2023,
author = {Hiroto Sugimori and Shinpei Hayashi},
title = {Towards Fine-grained Software Change Prediction},
year = 2023,
month = {jul},
}
This paper proposes a technique for improving the accuracy of information retrieval-based bug localization, which identifies the modules containing a bug from the natural language descriptions in a bug report, by exploiting information on code smells, i.e., low-quality parts of the source code. We also compare several strategies for quantifying smell information and analyze which characteristics contribute to accuracy.
@misc{hayashi-fit2023,
author = {林 晋平},
title = {不吉な臭いを利用したバグ箇所検索},
howpublished = {第22回情報科学技術フォーラム},
year = 2023,
month = {sep},
}
Identifiers in source code are frequently renamed. When renaming an identifier, identifiers containing words with the same naming intent as the changed words should be renamed together, but identifying them is difficult. In this paper, we propose RENAS, a technique that identifies the identifiers that should be changed simultaneously with the one a developer renamed and recommends their renaming. RENAS first extracts the identifiers that have a relationship in the program with the renamed identifier. It then extracts the word sequence composing each identifier, removing the effects of abbreviations and inflections. Regarding the developer's renaming as the addition, deletion, and replacement of words, it recommends the identifiers to which a similar change can be applied, together with their new names. A manual evaluation of RENAS using renamings extracted from real repositories yielded an F-measure of 0.61, exceeding the accuracy of existing techniques.
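The word-level view of renaming can be sketched as follows. Identifier splitting is shown, while the abbreviation and inflection handling that RENAS also performs is omitted, and the rename model is simplified to positional word replacement.

```python
import re

def split_words(identifier: str):
    # split camelCase and snake_case into lower-case words
    parts = re.split(r"[_$]|(?<=[a-z0-9])(?=[A-Z])", identifier)
    return [p.lower() for p in parts if p]

def propagate_rename(old: str, new: str, candidates):
    """Apply the word replacement observed in old -> new to other identifiers.
    Simplification: old and new are assumed to have the same word count."""
    replaced = {o: n for o, n in zip(split_words(old), split_words(new))
                if o != n}
    for ident in candidates:
        words = split_words(ident)
        if any(w in replaced for w in words):
            yield ident, "".join(
                replaced.get(w, w).capitalize() if i else replaced.get(w, w)
                for i, w in enumerate(words))

print(dict(propagate_rename("itemCount", "itemSize",
                            ["maxItemCount", "countItems", "total"])))
# {'maxItemCount': 'maxItemSize', 'countItems': 'sizeItems'}
# note: "countItems" shows why inflection handling ("items" vs "item") matters
```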
@article{osumi-sigse202303,
author = {大住 祐輝 and 林 晋平},
title = {語形と省略を考慮した一括名前変更リファクタリング支援},
journal = {情報処理学会研究報告},
volume = {2023-SE-213},
number = 10,
pages = {1--8},
year = 2023,
month = {mar},
}
In this study, we develop a system that structures and stores, in a graph database, the history of question-and-answer sessions between customers and analysts for creating a requirements list, and that uses the database system's search and visualization mechanisms to support the Q&A process and the version management of the requirements list evolving through Q&A. We design a graph database for storing Q&A in an object-oriented manner. Implementing it with Neo4j and using the database manipulation language Cypher, we confirmed with a library system example that functions such as searching for unapproved items and version management can be realized.
@article{imahori-sigss202303,
author = {今堀 由唯 and 加藤 潤三 and 林 晋平 and 大西 淳 and 佐伯 元司},
title = {要求獲得における質疑応答プロセスのグラフデータベースを用いた支援},
journal = {電子情報通信学会技術研究報告},
volume = 122,
number = 432,
pages = {7--12},
year = 2023,
month = {mar},
}
As an approach to identifying the source code that causes a bug, bug localization, which automatically recommends the files highly relevant to a bug report, has been studied. However, most existing studies handle only bug reports written in English, and cross-lingual bug localization, which targets non-English bug reports, has only been studied via machine translation, with unsatisfactory accuracy. In this paper, we examined an approach that applies word embeddings to cross-lingual bug localization to bridge the gap between non-English bug reports and source code. Comparing a method that computes the similarity between Japanese bug reports and source code with multilingual word embeddings against a method that machine-translates Japanese bug reports into English and computes the similarity with monolingual word embeddings, the former achieved higher accuracy. Moreover, the accuracy of this approach improved over existing machine translation-based cross-lingual bug localization by 10% in MRR and 27% in MAP.
@article{ruuu0048-sigss202303,
author = {大柴 昂輝 and 林 晋平},
title = {単語埋め込みによる言語横断バグ箇所検索},
journal = {電子情報通信学会技術研究報告},
volume = 122,
number = 432,
pages = {37--42},
year = 2023,
month = {mar},
}
Developers sometimes refactor to improve source code quality during software development. A problem in refactoring is that deciding whether it can be applied is difficult. We aim to assist this decision by searching for past refactoring instances whose situations are close to the refactoring at hand. We designed and implemented RefSearch, a system that detects instances using two highly accurate refactoring detectors and allows users to search for them from a web browser user interface using an easy-to-understand query language of our own design. We also set up four conditions for refactorings to search for and evaluated the expressiveness of the query language and the search performance of the system.
@article{toki-sigss202303,
author = {阿部 元輝 and 林 晋平},
title = {{RefSearch}:リファクタリング事例検索システムの試作},
journal = {電子情報通信学会技術研究報告},
volume = 122,
number = 432,
pages = {73--78},
year = 2023,
month = {mar},
}
Many refactoring detection techniques have been proposed to discover refactoring operations in commits. However, existing techniques cannot detect refactorings other than those covered by predefined detection rules, and defining such rules takes effort. This paper proposes a technique for identifying commits that contain refactorings based on learning from source code change differences. The proposed technique extracts the change differences between commits as edit scripts and performs binary classification of whether a commit contains refactorings using a recurrent neural network and a graph neural network. Because it learns only from commits and labels indicating whether each commit contains refactorings, no detection rules need to be defined. An evaluation using the commit histories of open-source software projects showed that the proposed technique identified the commits that developers regarded as refactorings more accurately than existing techniques.
@article{saoki-sigse202303,
author = {青木 俊介 and 林 晋平},
title = {ソースコードの変更差分の学習に基づくリファクタリングコミットの識別},
journal = {情報処理学会研究報告},
volume = {2023-SE-213},
number = 11,
pages = {1--8},
year = 2023,
month = {mar},
}
Bug localization, the task of identifying the location of the cause of a bug, is important in software maintenance. Although techniques that automatically perform bug localization at the method level are useful for developers, few such techniques exist, and no framework is available for evaluating them, so little is known about them. In this paper, we present FinerBench4BL, a framework that enables large-scale, method-level comparison of existing information retrieval-based bug localization techniques, and investigate the impact of the recommendation granularity on the techniques' accuracy, the additional information they consider in recommendation, and their execution time. The framework creates its dataset from method repositories in which the Bench4BL dataset is reconstructed at the method level by repository transformation. Based on the method repositories, we also converted existing file-level techniques to the method level with few modifications and combined them with the dataset into an evaluation framework. An investigation using FinerBench4BL revealed that method-level techniques reduce the debugging effort required, although their accuracy decreases. Moreover, because the influence of the additional information considered by existing techniques becomes smaller at the method level, adjustments that exploit source code information from multiple perspectives are needed to enable more accurate recommendation of bug locations. The increase in execution time caused by the change to the method level was within an acceptable range.
@inproceedings{tsumita-ses2023,
author = {積田 静夏 and 天嵜 聡介 and 林 晋平},
title = {モジュール粒度の違いがBug Localization手法へ与える影響の調査},
booktitle = {ソフトウェアエンジニアリングシンポジウム2023予稿集},
pages = {48--57},
year = 2023,
month = {aug},
}
In code review and in collecting refactoring instances, refactorings need to be identified in source code changes such as commits. This paper organizes the challenges in manually identifying refactorings and describes an identification support environment for reducing that effort. The environment detects and highlights partial token sequences common to the code before and after a change, assisting users in analyzing the movement of code fragments, distinguishing refactorings from functional changes, and finding anomalies contained in refactorings.
@inproceedings{ueno-ses2023,
author = {上野 尊義 and 林 晋平},
title = {ソースコード変更に含まれるリファクタリングの識別環境の構築に向けて},
booktitle = {ソフトウェアエンジニアリングシンポジウム2023予稿集},
pages = {240--241},
year = 2023,
month = {aug},
}
@article{hayashi-emse202306,
author = {Shinpei Hayashi and Yann-Ga\"{e}l Gu\'{e}h\'{e}neuc and Michel R. V. Chaudron},
title = {Introduction to the special issue on program comprehension},
journal = {Empirical Software Engineering},
volume = 28,
number = 3,
pages = {68:1--2},
year = 2023,
month = {jun},
}
Use case modeling is a popular way to represent the functionality of the system to be developed, and it consists of two parts: a use case diagram and use case descriptions. Use case descriptions are structured text written in natural language, and the usage of natural language can lead to poor descriptions that are ambiguous, inconsistent, and/or incomplete. Poor descriptions lead to missing requirements and the elicitation of incorrect requirements, as well as a less comprehensible use case model. This paper proposes a technique to automate the detection of bad smells in use case descriptions, i.e., symptoms of poor descriptions. First, to clarify bad smells, we analyzed existing use case models to concretely discover poor use case descriptions and developed a list of bad smells, i.e., a catalog of bad smells. Some of the bad smells can be refined into measures using the Goal-Question-Metric paradigm to automate their detection. The main contributions of this paper are the developed catalog of bad smells and the automated detection of these bad smells. We first implemented an automated smell detector for 22 bad smells and assessed its usefulness in an experiment. As a result, the first version of our tool achieved a precision of 0.591 and a recall of 0.981. Through evaluating our catalog and the automated tool, we found six additional bad smells and two metrics. The final version of the automated tool achieved a precision of 0.596 and a recall of 1.00.
@article{yotaro-ieicet202205,
author = {Yotaro Seki and Shinpei Hayashi and Motoshi Saeki},
title = {Cataloging Bad Smells in Use Case Descriptions and Automating Their Detection},
journal = {IEICE Transactions on Information and Systems},
volume = {105-D},
number = 5,
pages = {849--863},
year = 2022,
month = {may},
}
@article{igarashi-jssst202208,
author = {五十嵐 悠紀 and 川端 英之 and 河野 健二 and 千葉 滋 and 中澤 仁 and 林 晋平},
title = {特集「博士論文からの解説論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 39,
number = 3,
year = 2022,
month = {aug},
}
Goal refinement is a crucial step in goal-oriented requirements analysis to create a goal model of high quality. Poor goal refinement leads to missing requirements and the elicitation of incorrect requirements, as well as less comprehensible goal models. This paper proposes a technique to automate the detection of bad smells in goal refinement, i.e., symptoms of poor refinement. First, to clarify bad smells, we asked subjects to concretely identify instances of poor goal refinement. Based on a classification of the identified poor refinements, we defined four types of bad smells in goal refinement: Low Semantic Relation, Many Siblings, Few Siblings, and Coarse Grained Leaf, and developed two types of measures to detect them: measures on the graph structure of a goal model and the semantic similarity of goal descriptions. We implemented a supporting tool to detect the bad smells and assessed its usefulness in an experiment.
@article{hayashi-ieicet202205,
author = {Shinpei Hayashi and Keisuke Asano and Motoshi Saeki},
title = {Automating Bad Smell Detection in Goal Refinement of Goal Models},
journal = {IEICE Transactions on Information and Systems},
volume = {105-D},
number = 5,
pages = {837--848},
year = 2022,
month = {may},
}
@article{hayashi-ieicet202201,
author = {Shinpei Hayashi},
title = {FOREWORD: Special Section on Empirical Software Engineering},
journal = {IEICE Transactions on Information and Systems},
volume = {E105-D},
number = 1,
pages = 1,
year = 2022,
month = {jan},
}
@article{umatani-jssst202202,
author = {馬谷 誠二 and 河合 栄治 and 宋 剛秀 and 中澤 仁 and 林 晋平},
title = {特集「ソフトウェア論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 39,
number = 1,
pages = 2,
year = 2022,
month = {feb},
}
Background: Tracking program elements in source code is useful for program comprehension, code editing support, and so on. Historage, a history tracking approach based on repository transformation, enables developers to use a familiar interface to track a finer-grained history. Problem: Existing repository transformation tools have performance issues: (1) their transformation steps repeatedly expand and archive snapshots from the object database, and (2) they cannot transform repositories incrementally, which makes them unsuitable for development in which the source repository keeps being updated. Method: In this paper, we describe the design and implementation of Historinc, a repository transformation tool for fine-grained history tracking that reduces the transformation time. The tool uses git-stein, a repository transformation framework based on recording the mapping between objects, to suppress unnecessary expansion and archiving of files. In addition, it carries over the mapping recorded in a previous transformation and uses it to suppress unnecessary rewriting, enabling incremental transformation. Preliminary Evaluation: We compared the transformation time of our tool with that of an existing tool. Furthermore, we compared the kinds of mappings to be carried over. As a result, we found that our tool is more than four times faster than the existing tool and that carrying over the object mapping is effective.
@article{shiba-jssst202211,
author = {柴 駿太 and 林 晋平},
title = {{Historinc}: 細粒度履歴追跡のための増分的なリポジトリ変換ツール},
journal = {コンピュータソフトウェア},
volume = 39,
number = 4,
pages = {75--85},
year = 2022,
month = {nov},
}
Although the literature has noted the effects of branch handling strategies on change recommendation based on evolutionary coupling, they have been tested only in a limited experimental setting. Additionally, the branch characteristics that lead to these effects have not been investigated. In this study, we revisited the investigation conducted by Kovalenko et al. on the effect on change recommendation of two different branch handling strategies: including changesets from commits on a branch and excluding them. In addition to the setting used by Kovalenko et al., we introduced another setting for comparison: extracting a changeset for a branch from a merge commit at once. Using 30 open-source software systems, we compared the change recommendation results and the similarity of the extracted co-changes to future co-changes obtained with the two strategies. The results show that handling commits on a branch separately is often more appropriate for change recommendation, although the comparison in the additional setting resulted in balanced performance among the branch handling strategies. Additionally, we found that the merge commit size and the branch length positively influence the change recommendation results.
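The two strategies can be contrasted on toy data: treating each commit on a branch as its own changeset versus squashing the branch into one merge-commit changeset. File names are made up.

```python
from collections import Counter
from itertools import combinations

# Each changeset is the set of files a commit changed; co-change pairs
# drive evolutionary-coupling-based recommendation.
def cochange_pairs(changesets):
    counts = Counter()
    for cs in changesets:
        counts.update(combinations(sorted(cs), 2))
    return counts

branch = [{"A.java", "B.java"}, {"C.java"}]

# Strategy 1: keep the commits on the branch separate.
separate = cochange_pairs(branch)
# Strategy 2: squash the branch into a single merge-commit changeset.
squashed = cochange_pairs([set().union(*branch)])

print(separate)  # only ('A.java', 'B.java') is coupled
print(squashed)  # all three pairs are coupled once
```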
@inproceedings{k_isemoto-icpc2022,
author = {Keisuke Isemoto and Takashi Kobayashi and Shinpei Hayashi},
title = {Revisiting the Effect of Branch Handling Strategies on Change Recommendation},
booktitle = {Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension},
pages = {162--172},
year = 2022,
month = {may},
}
The renaming of program identifiers is the most common refactoring operation. Because some identifiers are related to each other, developers may need to rename related identifiers together. Aims: To understand how developers rename multiple identifiers simultaneously, it is necessary to consider the relationships between identifiers in the program as well as the matching of non-identical but semantically similar identifiers. Method: We investigate the relationships between co-renamed identifiers and identify the types of relationships that contribute to improving recommendation, using more than 1M renaming instances collected from the histories of open-source software projects. We also evaluate and compare the impact of co-renaming and the relationships between identifiers when inflections occurring in the words of identifiers are taken into account. Results: We revealed several relationships that frequently hold between co-renamed identifiers, such as identifiers of methods in the same class or an identifier defining a variable and another used for initializing that variable, depending on the type of the renamed identifiers. Additionally, the consideration of inflections did not affect the tendency of the relationships. Conclusion: These results suggest an approach that prioritizes the identifiers to be recommended depending on their types and the type of the renamed identifier.
@inproceedings{osumi-apsec2022,
author = {Yuki Osumi and Naotaka Umekawa and Hitomi Komata and Shinpei Hayashi},
title = {Empirical Study of Co-Renamed Identifiers},
booktitle = {Proceedings of the 29th Asia-Pacific Software Engineering Conference},
pages = {71--80},
year = 2022,
month = {dec},
}
Detecting refactorings in commit history is essential to improve the comprehension of code changes in code reviews and to provide valuable information for empirical studies on software evolution. Several techniques have been proposed to detect refactorings accurately at the granularity level of a single commit. However, refactorings may be performed over multiple commits because of code complexity or other real development problems, which is why attempting to detect refactorings at single-commit granularity is insufficient. We observe that some refactorings can be detected only at coarser granularity, that is, changes spread across multiple commits. Herein, this type of refactoring is referred to as coarse-grained refactoring (CGR). We compared the refactorings detected on different granularities of commits from 19 open-source repositories. The results show that CGRs are common, and their frequency increases as the granularity becomes coarser. In addition, we found that Move-related refactorings tended to be the most frequent CGRs. We also analyzed the causes of CGR and suggested that CGRs will be valuable in refactoring research.
@inproceedings{chenlei-icpc2022,
author = {Lei Chen and Shinpei Hayashi},
title = {Impact of Change Granularity in Refactoring Detection},
booktitle = {Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension},
pages = {565--569},
year = 2022,
month = {may},
}
This paper proposes Ammonia, an approach that identifies project-specific bug patterns by statically analyzing the development history of a software development project. Warnings based on the derived project-specific bug patterns can complement the output of ordinary static analysis tools, which is limited to general patterns.
@misc{hayashi-fit2022,
author = {林 晋平},
title = {{Ammonia}: プロジェクト特有バグパターンの導出法},
howpublished = {第21回情報科学技術フォーラム},
year = 2022,
month = {sep},
}
@misc{takahashi-a-at-ses2022,
author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {不吉な臭いを利用したBug Localization},
year = 2022,
month = {sep},
}
In evaluating automated bug localization techniques, the choice of the snapshot strategy, which determines the source code used as the localization target, affects the results. With the strategies used in the evaluation of existing techniques, there is a temporal gap between the selected source code and the source code that should actually be the localization target, and the source code changes caused by this gap threaten the validity of the evaluation. In this paper, we propose the Fixing Commit strategy, which uses for each bug report the source code at the commit immediately before the bug fix, and use it to investigate the magnitude of the impact of different snapshot strategies. Comparing the accuracy of the Fixing Commit strategy with the strategies used for existing techniques, bug localization accuracy improved under the Fixing Commit strategy, indicating that existing bug localization techniques may have been underestimated. Furthermore, measuring the accuracy changes separately for renames and for content changes of source code, renames had a larger impact than content changes.
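A minimal sketch of the Fixing Commit strategy with git: the snapshot used for each bug report is the parent of its fixing commit. Repository paths, commit ids, and the localization function are placeholders.

```python
import subprocess

def checkout_pre_fix_snapshot(repo: str, fix_commit: str) -> str:
    """Check out the parent of the bug-fixing commit (the code just
    before the fix) and return its commit id."""
    parent = subprocess.run(
        ["git", "-C", repo, "rev-parse", f"{fix_commit}^"],
        capture_output=True, text=True, check=True).stdout.strip()
    subprocess.run(["git", "-C", repo, "checkout", "--detach", parent],
                   check=True)
    return parent

# for bug in bug_reports:                      # hypothetical loop
#     snapshot = checkout_pre_fix_snapshot(repo_path, bug.fix_commit)
#     run_localization(snapshot, bug.report)   # hypothetical function
```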
@article{r_mitsui-sigss202201,
author = {三井 亮称 and 林 晋平},
title = {ソースコードの時間変化がバグ限局に与える影響の調査},
journal = {電子情報通信学会技術研究報告},
volume = 121,
number = 318,
pages = {40--45},
year = 2022,
month = {jan},
}
Bug localization techniques automatically recommend the locations where bugs occur to reduce debugging effort. Although method-level bug localization is useful for developers, few such tools exist and no large-scale evaluation framework is available, so little is known about it. In this paper, we refined bug localization techniques to a finer granularity based on repository transformation and evaluated them. First, the dataset of Bench4BL, a file-level evaluation framework, is converted to the method level by repository transformation. Using the generated method repositories, the techniques provided by Bench4BL are converted to the method level. Combining these, we constructed a method-level evaluation framework and used it for evaluation. This approach can convert existing techniques to the method level with few modifications. The evaluation showed that the converted techniques did not perform well, but it revealed the performance difference before and after the conversion and the existence of settings suitable for the method level.
@article{tsumita-sigss202207,
author = {積田 静夏 and 林 晋平 and 天嵜 聡介},
title = {リポジトリ変換によるBug Localization手法の細粒度化とその評価},
journal = {電子情報通信学会技術研究報告},
volume = 122,
number = 138,
pages = {37--42},
year = 2022,
month = {jul},
}
Search-based refactoring searches for a sequence of refactorings that achieves a specific objective. Although a typical objective is to improve code quality, a different perspective is also required: the found sequence needs to be reviewed before its application to the code, and it may not be applied if it requires a high review effort. We propose a multi-objective search-based technique that searches for a sequence of refactorings that both improves code quality and requires low review effort, to recommend to developers. We use static analysis to evaluate code quality and analyze code ownership and the relationships among developers to estimate the required review effort. A non-dominated sorting genetic algorithm is used to find the best trade-off among the multiple objectives. We evaluated our technique on open-source repositories to show its effectiveness.
@article{chenlei-sigss202207,
author = {陳 磊 and 林 晋平},
title = {探索に基づくリファクタリング推薦におけるレビュー工数見積もりの利用},
journal = {電子情報通信学会技術研究報告},
volume = 122,
number = 138,
pages = {103--108},
year = 2022,
month = {jul},
}
Prior work has pointed out the existence of inappropriate data, as well as countermeasures, that affect the results of repository mining studies. However, applying these findings individually is inefficient, and the tools used in the papers that report them are often not published or are difficult to combine with other tools. In this paper, we propose a tool that realizes such countermeasures by directly rewriting repositories as a preprocessing step before analysis. Using the tool, the countermeasures can be applied uniformly, regardless of the subsequent analysis. We analyzed existing studies and considered how to realize their countermeasures with the tool. We also implemented part of the processing in the tool and conducted an experiment comparing the results of a repository mining technique on repositories preprocessed with the tool and on those that were not.
@article{shiba-sigss202207,
author = {柴 駿太 and 林 晋平},
title = {リポジトリマイニング手法に対する前処理としての履歴書き換えツールの試作},
journal = {電子情報通信学会技術研究報告},
volume = 122,
number = 138,
pages = {31--36},
year = 2022,
month = {jul},
}
In change recommendation based on learning change patterns, learning from changes outside the project introduces unacceptable recommendations caused by project-specific patterns. In this paper, based on a preliminary finding that change patterns appearing in many projects are highly general, we propose a technique that prioritizes patterns with high cross-project commonality to produce recommendations that are acceptable and that match the generality level the user desires. The technique defines the number of projects in which a change pattern appears as a new metric and uses it for filtering and ranking recommendation candidates. It also classifies change patterns into groups of high and low generality based on project commonality so that users can choose which group to use for recommendation. In our evaluation, the technique improved the proportion of acceptable recommendations. By switching options, highly general recommendations could be selected, whereas the selectivity of less general ones was not high.
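The metric itself is simple to state: the commonality of a pattern is the number of distinct projects it was mined from, used both as a filter threshold and as a ranking key. A sketch with made-up pattern data:

```python
occurrences = {            # pattern id -> projects it was mined from
    "null-check-guard": {"projA", "projB", "projC"},
    "rename-logger":    {"projA"},
}

def commonality(pattern: str) -> int:
    return len(occurrences.get(pattern, set()))

def rank(patterns, min_projects=2):
    # filter out project-specific patterns, then rank by commonality
    general = [p for p in patterns if commonality(p) >= min_projects]
    return sorted(general, key=commonality, reverse=True)

print(rank(occurrences))   # ['null-check-guard']
```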
@article{ando-n-sigss202203,
author = {安藤 直樹 and 林 晋平},
title = {ソースコード変更パターンのプロジェクト共通性を考慮した変更推薦},
journal = {電子情報通信学会技術研究報告},
volume = 121,
number = 416,
pages = {72--77},
year = 2022,
month = {mar},
}
@misc{chenlei-ses2022,
author = {Lei Chen and Shinpei Hayashi},
title = {Impact of Change Granularity in Refactoring Detection},
howpublished = {ソフトウェアエンジニアリングシンポジウム2022},
year = 2022,
month = {sep},
}
We report the results of applying the cross-lingual bug localization approach proposed by Xia et al. to development projects that use Japanese. Cross-lingual bug localization was realized by applying machine translation to the Japanese descriptions in bug reports and source code, unifying them into English, and then applying a bug localization technique based on English descriptions. We prototyped a tool based on the bug localization technique BugLocator and applied it to two industrial projects; unlike the results of Xia et al., good results were obtained.
@inproceedings{hayashi-ses2022,
author = {林 晋平 and 小林 隆志 and 高井 康勢 and 加藤 正恭},
title = {言語横断バグ箇所検索手法の日本語記述への適用可能性},
booktitle = {ソフトウェアエンジニアリングシンポジウム2022予稿集},
pages = {131--136},
year = 2022,
month = {sep},
}
In the process of creating a requirements specification for software development, an analyst gathers customer requirements and builds a requirements list. However, repeated question-and-answer sessions cause repeated changes that introduce contradictions into the discussion, and unapproved items are sometimes left without discussion. When these occur, requirements that do not match the customer's intent or missing requirements result. The larger the system, the harder it is for the analyst to grasp all the changes based on the Q&A results. This study aims to realize a system that structures and stores the history of Q&A between customers and analysts for creating a requirements list in a graph database and that supports creating the requirements list over the Q&A history using the database system's search and visualization mechanisms. This paper describes the design and implementation of a graph database for storing the Q&A. The Q&A history is modeled in an object-oriented manner, and the database is built with Neo4j. Using Cypher, the database manipulation language of Neo4j, we confirmed that functions such as searching for unapproved items can be realized.
@inproceedings{imahori-fose2022,
author = {今堀 由唯 and 加藤 潤三 and 林 晋平 and 大西 淳 and 佐伯 元司},
title = {要求獲得における質疑応答履歴のグラフデータベースシステムの実現},
booktitle = {第29回ソフトウェア工学の基礎ワークショップ予稿集},
pages = {123--128},
year = 2022,
month = {nov},
}
@misc{tsumita-fose2022,
author = {積田 静夏 and 林 晋平 and 天嵜 聡介},
title = {リポジトリ変換によるBug Localization手法の細粒度化とその影響},
year = 2022,
month = {nov},
}
@misc{shiba-fose2022,
author = {柴 駿太 and 林 晋平},
title = {{Preform}: マイニング前処理向けGitリポジトリ書換えツール},
year = 2022,
month = {nov},
}
@misc{osumi-ncjssst2022,
author = {大住 祐輝 and 林 晋平},
title = {識別子の命名意図に基づく一括名前変更リファクタリング支援に向けて},
howpublished = {日本ソフトウェア科学会第39回大会},
year = 2022,
month = {aug},
}
Code smells can be detected using tools such as a static analyzer that detects code smells based on source code metrics. Developers perform refactoring activities based on the result of such detection tools to improve source code quality. However, such an approach can be considered as reactive refactoring, i.e., developers react to code smells after they occur. This means that developers first suffer the effects of low-quality source code before they start solving code smells. In this study, we focus on proactive refactoring, i.e., refactoring source code before it becomes smelly. This approach would allow developers to maintain source code quality without having to suffer the impact of code smells. To support the proactive refactoring process, we propose a technique to detect decaying modules, which are non-smelly modules that are about to become smelly. We present empirical studies on open source projects with the aim of studying the characteristics of decaying modules. Additionally, to facilitate developers in the refactoring planning process, we perform a study on using a machine learning technique to predict decaying modules and report a factor that contributes most to the performance of the model under consideration.
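The prediction step can be sketched with scikit-learn on made-up metric vectors; the actual feature set, labeling procedure, and validation protocol in the paper differ.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each module is described by illustrative metric values (e.g., size,
# complexity, recent churn) and labeled 1 if it later became smelly.
X = [[120, 4, 2], [560, 18, 9], [300, 7, 1], [740, 25, 14],
     [90, 3, 0], [410, 16, 6], [220, 5, 3], [680, 22, 11]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=2, scoring="f1").mean())
```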
@article{natthawute-ieicet202110,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {Supporting Proactive Refactoring: An Exploratory Study on Decaying Modules and Their Prediction},
journal = {IEICE Transactions on Information and Systems},
volume = {E104-D},
number = 10,
pages = {1601--1615},
year = 2021,
month = {oct},
}
In the world of the Internet of Things (IoT), heterogeneous systems and devices need to be connected and exchange data with others. How data exchange can be automatically realized becomes a critical issue. An information model (IM) is frequently adopted and utilized to solve the data interoperability problem. Meanwhile, as IoT systems and devices can have different IMs with different modeling methodologies and formats such as UML, IEC 61360, etc., automated data interoperability based on various IMs is recognized as an urgent problem. In this paper, we propose an approach to automate the data interoperability, i.e. data exchange among similar entities in different IMs. First, similarity scores among entities are calculated based on their syntactic and semantic features. Then, in order to precisely get similar candidates to exchange data, a concept of class distance calculated with a Virtual Distance Graph (VDG) is proposed to narrow down obtained similar properties for data exchange. Through analyzing the results of a case study, the class distance based on VDG can effectively improve the precisions of calculated similar properties. Furthermore, data exchange rules can be generated automatically. The results reveal that the approach of this research can efficiently contribute to resolving the data interoperability problem.
@article{wlan-ijseke202103,
author = {Lan Wang and Shinpei Hayashi and Motoshi Saeki},
title = {Applying Class Distance to Decide Similarity on Information Models for Automated Data Interoperability},
journal = {International Journal of Software Engineering and Knowledge Engineering},
volume = 31,
number = 3,
pages = {405--434},
year = 2021,
month = {mar},
}
Bug localization is an important aspect of software maintenance because it can locate modules that should be changed to fix a specific bug. Our previous study showed that the accuracy of the information retrieval (IR)-based bug localization technique improved when used in combination with code smell information. Although this technique showed promise, the study showed limited usefulness because of the small number of: 1) projects in the dataset, 2) types of smell information, and 3) baseline bug localization techniques used for assessment. This paper presents an extension of our previous experiments on Bench4BL, the largest bug localization benchmark dataset available for bug localization. In addition, we generalized the smell-aware bug localization technique to allow different configurations of smell information, which were combined with various bug localization techniques. Our results confirmed that our technique can improve the performance of IR-based bug localization techniques for the class level even when large datasets are processed. Furthermore, because of the optimized configuration of the smell information, our technique can enhance the performance of most state-of-the-art bug localization techniques.
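One simple way to picture the combination (illustrative only, not the paper's exact formulation) is a weighted sum of the IR-based suspiciousness score and a normalized smell score per module:

```python
# Combine an IR-based suspiciousness score with a quantified smell
# score per module; the linear form and alpha value are illustrative.
def combined_ranking(ir_scores, smell_scores, alpha=0.8):
    score = {m: alpha * ir_scores[m] + (1 - alpha) * smell_scores.get(m, 0.0)
             for m in ir_scores}
    return sorted(score, key=score.get, reverse=True)

ir = {"Order.java": 0.62, "Cart.java": 0.58, "Util.java": 0.10}
smell = {"Cart.java": 0.9}   # e.g., normalized smell intensity
print(combined_ranking(ir, smell))  # Cart.java overtakes Order.java
```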
@article{takahashi-a-at-jss202108,
author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {An Extensive Study on Smell-Aware Bug Localization},
journal = {Journal of Systems and Software},
volume = 178,
pages = {110986:1--17},
year = 2021,
month = {aug},
}
Problem: Modern systems contain parts that are themselves systems. Such complex systems thus have sets of subsystems that have their own variability. These subsystems contribute to the functionality of a whole system-of-systems (SoS). Such systems have a very high degree of variability. Therefore, a modeling technique for the variability of an entire SoS is required to express two different levels of variability: variability of the SoS as a whole and variability of subsystems. If these levels are described together, the model becomes hard to understand. When the variability model of the SoS is described separately, each variability model is represented by a tree structure and these models are combined in a further tree structure. For each node in a variability model, a quantity is assigned to express the multiplicity of its instances per one instance of its parent node. Quantities of the whole system may refer to the number of subsystem instances in the system. From the viewpoint of the entire system, constraints and requirements written in natural language are often ambiguous regarding the quantities of subsystems. Such ambiguous constraints and requirements may lead to misunderstandings or conflicts in an SoS configuration. Approach: A separate notion is proposed for variability of an SoS; one model considers the SoS as an undivided entity, while the other considers it as a combination of subsystems. Moreover, a domain-specific notation is proposed to express relationships among the variability properties of systems, to solve the ambiguity of quantities and establish the total validity. This notation adapts an approach, named Pincer Movement, which can then be used to automatically deduce the quantities for the constraints and requirements. Validation: The descriptive capability of the proposed notation was validated with four examples of cloud providers. In addition, the proposed method and description tool were validated through a simple experiment on describing variability models with real practitioners.
@article{shinbara-ijseke202105,
author = {Daisuke Shimbara and Motoshi Saeki and Shinpei Hayashi and {\O}ystein Haugen},
title = {Handling Quantity in Variability Models for System-of-Systems},
journal = {International Journal of Software Engineering and Knowledge Engineering},
volume = 31,
number = 5,
pages = {693--724},
year = 2021,
month = {may},
}
@misc{takahashi-a-at-ase2021,
author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {An Extensive Study on Smell-Aware Bug Localization},
howpublished = {36th IEEE/ACM International Conference on Automated Software Engineering},
year = 2021,
month = {nov},
}
@misc{higo-ase2021,
author = {Yoshiki Higo and Shinpei Hayashi and Shinji Kusumoto},
title = {On Tracking {Java} Methods with {Git} Mechanisms},
howpublished = {36th IEEE/ACM International Conference on Automated Software Engineering},
year = 2021,
month = {nov},
}
Primitive types are fundamental components available in any programming language, which serve as the building blocks of data manipulation. Understanding the role of these types in source code is essential to write software. The most convenient way to express the functionality of these variables in the code is through describing them in comments. Little work has been conducted on how often these variables are documented in code comments and what types of knowledge the comments provide about variables of primitive types. In this paper, we present an approach for detecting primitive variables and their description in comments using lexical matching and semantic matching. We evaluate our approaches by comparing the lexical and semantic matching performance in terms of recall, precision, and F-score, against 600 manually annotated variables from a sample of GitHub projects. The performance of our semantic approach based on F-score was superior compared to lexical matching, 0.986 and 0.942, respectively. We then create a taxonomy of the types of knowledge contained in these comments about variables of primitive types. Our study showed that developers usually documented the variables’ identifiers of a numeric data type with their purpose (69.16%) and concept (72.75%) more than the variables’ identifiers of type String which were less documented with purpose (61.14%) and concept (55.46%). Our findings characterise the current state of the practice of documenting primitive variables and point at areas that are often not well documented, such as the meaning of boolean variables or the purpose of fields and local variables.
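The lexical-matching side can be sketched as a word-level check between a variable name and nearby comment text; the code/comment pairs are made up, and the actual study additionally uses semantic matching.

```python
import re

def words(identifier: str):
    # split camelCase and snake_case into lower-case words
    return [w.lower() for w in
            re.split(r"[_$]|(?<=[a-z0-9])(?=[A-Z])", identifier) if w]

def lexically_documented(var_name: str, comment: str) -> bool:
    text = comment.lower()
    return var_name.lower() in text or all(w in text for w in words(var_name))

print(lexically_documented("maxRetries", "// maximum number of retries"))  # True
print(lexically_documented("flag", "// enables verbose output"))           # False
```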
@inproceedings{mahfouth-msr2021,
author = {Mahfouth Alghamdi and Shinpei Hayashi and Takashi Kobayashi and Christoph Treude},
title = {Characterising the Knowledge about Primitive Variables in {Java} Code Comments},
booktitle = {Proceedings of the 18th IEEE/ACM International Conference on Mining Software Repositories},
pages = {460--470},
year = 2021,
month = {may},
}
It is necessary to gather real refactoring instances when conducting empirical studies on refactoring. However, existing refactoring detection approaches are insufficient in terms of their accuracy and coverage. Reducing the manual effort of curating refactoring data is challenging in terms of obtaining various refactoring data accurately. This paper proposes a tool named RefactorHub, which supports users in manually annotating potential refactoring-related commits obtained from existing refactoring detection approaches to make their refactoring information more accurate and complete with rich details. In the proposed approach, the parameters of each refactoring operation are defined as a meaningful set of code elements in the versions before or after refactoring. RefactorHub provides interfaces and supporting features to annotate each parameter, such as the automated filling of dependent parameters, thereby avoiding wrong or uncertain selections. A preliminary user study showed that RefactorHub reduced annotation effort and improved the degree of agreement among users. Source code and a demo video are available at https://github.com/salab/RefactorHub.
@inproceedings{kuramoto-icpc2021,
author = {Ryo Kuramoto and Motoshi Saeki and Shinpei Hayashi},
title = {{RefactorHub}: A Commit Annotator for Refactoring},
booktitle = {Proceedings of the 29th IEEE/ACM International Conference on Program Comprehension},
pages = {495--499},
year = 2021,
month = {may},
}
@misc{higo-ses2021,
author = {Yoshiki Higo and Shinpei Hayashi and Shinji Kusumoto},
title = {Gitの機能を用いたJavaメソッドの追跡},
howpublished = {ソフトウェアエンジニアリングシンポジウム2021},
year = 2021,
month = {sep},
}
Automating milestone context search, which identifies the modules needed to realize a given development milestone, would improve development productivity. Information retrieval (IR) techniques have been used for locating the modules corresponding to a task description, as in feature location and bug localization. However, these locate modules for a single task description and cannot be applied directly to milestone context search, which handles a set of task descriptions. In this paper, we propose several techniques that search for a milestone context using IR techniques and analyze them. The proposed techniques obtain a ranking by applying an IR technique to each task description belonging to the milestone and merge the rankings into a single ranking using a data fusion method. In an evaluation using task descriptions from four OSS projects, mnz, the fusion method that multiplies the score sum by the number of non-zero scores, was the best among the proposed techniques, regardless of the number of tasks belonging to a milestone. In addition, the higher the accuracy of the underlying IR technique, the higher the accuracy of the milestone context search exploiting it.
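The best-performing fusion, mnz, corresponds to CombMNZ from the data fusion literature: the fused score of a module is its score sum multiplied by the number of per-task rankings that give it a non-zero score. A minimal sketch:

```python
from collections import defaultdict

def comb_mnz(rankings):
    """rankings: list of {module: score} dicts, one per task description."""
    fused = defaultdict(float)
    nonzero = defaultdict(int)
    for scores in rankings:
        for module, s in scores.items():
            fused[module] += s
            if s > 0:
                nonzero[module] += 1
    return sorted(((nonzero[m] * fused[m], m) for m in fused), reverse=True)

r1 = {"Pay.java": 0.9, "Cart.java": 0.4}
r2 = {"Cart.java": 0.5, "User.java": 0.7}
print(comb_mnz([r1, r2]))  # Cart.java is boosted by appearing in both
```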
@article{watashun-sigss202101,
author = {渡辺 俊介 and 佐伯 元司 and 林 晋平},
title = {マイルストーンコンテキストの捜索のためのタスク記述群活用法の分析},
journal = {電子情報通信学会技術研究報告},
volume = 120,
number = 343,
pages = {37--42},
year = 2021,
month = {jan},
}
We describe a method for merging multiple thesauri, individually constructed from multiple information sources belonging to the same domain, into a single thesaurus. A case study shows that the merged thesaurus is useful in that it enables eliciting more requirements in requirements elicitation.
@article{jkato-sigse202107,
author = {加藤 潤三 and 佐伯 元司 and 大西 淳 and 林 晋平 and 海谷 治彦 and 山本 修一郎},
title = {要求獲得のためのシソーラスの統合},
journal = {情報処理学会研究報告},
volume = {2021-SE-208},
number = 4,
pages = {1--8},
year = 2021,
month = {jul},
}
The quality of use case descriptions is important because it affects the quality and cost of the entire software development, but quality problems occur in practice. Among them, problems that do not depend on structural characteristics such as sentence length or the number of verbs are hard to detect, and their detection has rarely been automated. In this paper, to automatically detect bad smells, i.e., indications of presumably low-quality parts of descriptions, that are caused by data dependencies, we developed a tool that builds and inspects a control flow graph whose nodes hold the input and output data of actions. The tool classifies the verbs in sentences into seven categories using a dictionary and word similarity, builds the graph based on the case frames assigned to each category and on the structural characteristics of the description, and automatically detects bad smells by analyzing data dependencies such as "data to be updated should have been created beforehand." To evaluate the tool, we conducted an investigation using five use case descriptions representing a train reservation system of a fictional railway company.
@article{yotaro-sigse202103,
author = {関 洋太朗 and 林 晋平 and 佐伯 元司},
title = {データ依存解析によるユースケース記述中の不吉な臭い検出},
journal = {情報処理学会研究報告},
volume = {2021-SE-207},
number = 20,
pages = {1--8},
year = 2021,
month = {mar},
}
Although the influence of branches on change recommendation using co-change information has been pointed out, the branch characteristics that lead to this influence have not been clarified. In this paper, we compare the co-change information and the change recommendations based on it between treating the changes on a branch as individual commits and merging them into a single merge commit, and we clarify the branch characteristics that affect change recommendation. For existing OSS repositories, we performed change recommendation under each branch treatment and compared the results with multiple evaluation metrics. From the tendencies of the recommendation results with respect to the characteristics of the partial change histories used for generating recommendation rules, and from the relation of future changes to the co-change information of branch commits and merge commits, treating the commits on a branch individually was more appropriate for change recommendation in many cases.
@article{k_isemoto-sigse202103,
author = {伊勢本 圭亮 and 小林 隆志 and 佐伯 元司 and 林 晋平},
title = {共変更に基づく変更推薦に対するブランチ戦略の影響分析},
journal = {情報処理学会研究報告},
volume = {2021-SE-207},
number = 24,
pages = {1--8},
year = 2021,
month = {mar},
}
When renaming an identifier in a program, other identifiers related to the change sometimes should also be renamed. To accurately recommend the identifiers that should be renamed together, it is necessary to consider the relationships between identifiers in the program and the possibility of renaming identifiers that contain semantically related, though not necessarily identical, words. In this paper, we investigate the relationships between co-renamed identifiers and clarify the relationships that can be effective for improving recommendation. We also investigate and compare the ratio of co-renames and the relationships between identifiers with and without considering inflections of words sharing the same stem, clarifying the influence of inflections. The investigation showed that the relationship "methods in the same class" appeared most frequently among all co-renames (69.6%), whereas "a variable and its type" appeared most frequently in co-renames involving class names and "a formal parameter and its actual argument" in those involving variables. The tendencies of the relationships hardly changed regardless of whether inflections were considered. These results suggest that the priority of the recommended identifiers should depend on the kind of the renamed identifier and that inflections have little influence on recommendation.
@article{osumi-sigse202103,
author = {大住 祐輝 and 佐伯 元司 and 林 晋平},
title = {大規模名前変更データセットを用いた識別子名の同時変更の調査},
journal = {情報処理学会研究報告},
volume = {2021-SE-207},
number = 22,
pages = {1--8},
year = 2021,
month = {mar},
}
In goal-oriented requirements analysis, one of the requirements elicitation methods, requirements are managed with a goal graph, so requirements analysts need to modify the goal graph when requirements change. Some goals in a goal graph are inappropriate to change, but there is no way to know which goals must not be changed. In this paper, we propose a technique for avoiding inappropriate changes by using the change history and defining dependencies between goals. We implemented the proposed technique as an extension of a tool supporting goal-oriented requirements analysis and evaluated it in a case study using a goal graph of an online shopping system. As a result, inappropriate deletions of goals could be avoided.
@article{y_yamazaki-sigss202103,
author = {山崎 友路 and 林 晋平 and 佐伯 元司},
title = {変更履歴とゴール間依存関係を用いたゴールへの不適切な変更操作の回避支援},
journal = {電子情報通信学会技術研究報告},
volume = 120,
number = 407,
pages = {96--101},
year = 2021,
month = {mar},
}
Goal-oriented requirements analysis methods support the requirements analysis phase of software development. For the automatic detection of inappropriate refinements in a goal graph, Asano et al. attempted detection based on the similarity between goals in addition to the graph structure, but their approach had problems in handling cases where case frames cannot be obtained, in ignoring verbs and cases other than the object case, and in the vocabulary size of the dictionary. In this paper, we propose two new similarity calculation methods that address these problems. One decomposes goal descriptions into words and computes similarity as a weighted average based on word frequency; the other extracts phrases by dependency parsing; both use Word2vec trained on a large corpus. We applied the proposed similarity calculation methods to goal graphs seeded with inappropriate refinements and conducted an evaluation experiment. As a result, the number of edges for which similarity can be calculated increased substantially compared with the existing method, and one of the proposed methods improved detection performance.
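The frequency-weighted averaging variant can be sketched with a toy embedding table standing in for a Word2vec model trained on a large corpus; the weights stand in for corpus-derived word frequencies.

```python
import numpy as np

# Toy embeddings and weights; a real setup would use pretrained Word2vec
# vectors and frequency statistics.
EMB = {"register": np.array([0.9, 0.1]), "user": np.array([0.8, 0.3]),
       "delete": np.array([-0.7, 0.6]), "account": np.array([0.7, 0.4])}
WEIGHT = {"register": 1.0, "user": 0.5, "delete": 1.0, "account": 0.6}

def embed(words):
    """Weighted mean of the word vectors of a goal description."""
    vs = [WEIGHT[w] * EMB[w] for w in words if w in EMB]
    return sum(vs) / sum(WEIGHT[w] for w in words if w in EMB)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embed(["register", "user"]), embed(["register", "account"])))  # high
print(cosine(embed(["register", "user"]), embed(["delete", "account"])))    # low
```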
@article{iijima-sigss202103,
author = {飯島 慧 and 林 晋平 and 佐伯 元司},
title = {不適切なゴール詳細化検出のためのゴール記述類似度算出法の比較},
journal = {電子情報通信学会技術研究報告},
volume = 120,
number = 407,
pages = {67--72},
year = 2021,
month = {mar},
}
The task of identifying the source code to be fixed during debugging is called bug localization. Most existing information retrieval-based bug localization techniques do not assume that the input bug report is written in a language other than English, and there are few evaluations of cross-lingual bug localization targeting non-English languages. In this paper, we constructed a dataset usable for cross-lingual bug localization using GitHub. By applying machine translation to the constructed multilingual dataset and then applying an existing bug localization technique, we evaluated the performance of cross-lingual bug localization for nine languages. The results showed that, under the premise that the identifier names in source code are written in English, cross-lingual bug localization is effective, with accuracy similar to that of existing bug localization techniques targeting bug reports written in English.
@article{r_mitsui-sigss202101,
author = {三井 亮称 and 佐伯 元司 and 林 晋平},
title = {多言語データセットを用いた言語横断バグ限局の性能評価},
journal = {電子情報通信学会技術研究報告},
volume = 120,
number = 343,
pages = {31--36},
year = 2021,
month = {jan},
}
An existing technique for code clone refactoring maximizes the extraction range by introducing lambda expressions to relax the preconditions of refactoring, but relaxing the conditions may reduce readability. By considering combinations of relaxing preconditions and avoiding precondition violations through adjustment of the extraction range, multiple refactoring candidates can sometimes be obtained from a single clone pair. In this paper, we propose a technique that generates multiple refactoring candidates through combinations of condition relaxation and violation avoidance based on precondition violations. We investigated the usefulness of generating multiple refactoring candidates through quantitative analysis and an evaluation by developers. As a result, a trade-off was observed between the effect of refactoring and readability. Developers also used condition relaxation and violation avoidance depending on the situation, showing that generating multiple refactoring candidates is useful.
@article{yutarootani-sigss202103,
author = {大谷 悠太郎 and 林 晋平},
title = {複数観点を考慮したクローンリファクタリング支援},
journal = {電子情報通信学会技術研究報告},
volume = 120,
number = 407,
pages = {61--66},
year = 2021,
month = {mar},
}
[yutarootani-sigss202103]: as a page
@misc{higo-icse2021,
author = {Yoshiki Higo and Shinpei Hayashi and Hideaki Hata and Meiyappan Nagappan},
title = {{Ammonia}: An Approach for Deriving Project Specific Bug Patterns},
howpublished = {Journal-First Track at 43rd International Conference on Software Engineering},
year = 2021,
month = {may},
}
[higo-icse2021]: as a page
背景: ソースコードに記述されたプログラム要素の履歴追跡は重要である.リポジトリ変換に基づく履歴追跡手法Historageは,開発者が慣れ親しんだ履歴管理インタフェースにより細粒度の履歴追跡を行える.問題: 既存のリポジトリ変換ツールは,変換時にオブジェクトデータベースからのファイルの展開と格納を繰り返すコストが大きい,増分的な変換を行わないため変換元リポジトリの更新を伴う開発での利用に適さない,という点で変換時間に課題がある.手法: 本論文では,変換時間の削減を実現したリポジトリ変換ツールの設計と実装について述べる.提案ツールでは,オブジェクト変換の対応関係の記録に基づくリポジトリ変換ライブラリgit-steinを利用することにより,不要な展開と格納を抑制する.また,更新前の変換時に記録した対応関係を更新後に持ち越し,それを用いて不要な変換を抑制することにより増分的な変換を可能とする.評価: 既存の変換ツールと実行時間の比較を行った.また,持ち越す対応関係の種類を比較し,その得失を議論した.
Background: It is important to track the history of program elements in source code. Historage, a history tracking approach based on repository transformation, enables developers to use a familiar interface to track a finer-grained history. Problem: Existing repository transformation tools have performance issues: (1) their transformation steps include the expansion and archiving of snapshots from the object database, and (2) they cannot transform repositories incrementally, which makes them unsuitable for supporting development in which the source repository keeps being updated. Method: In this paper, we describe the design and implementation of a transformation tool that reduces the transformation time. We use git-stein, a repository transformation library based on the recording of mappings between objects, to suppress unnecessary expansion and archiving of files. In addition, we save the mapping and use it later to support incremental transformation. Evaluation: We compared the transformation time of our tool with that of an existing tool. Furthermore, we compared the performance of carrying over different kinds of mappings and discussed their trade-offs.
@inproceedings{shiba-ncjssst2021,
author = {柴 駿太 and 林 晋平},
title = {細粒度履歴追跡のための増分的なリポジトリ変換ツールの設計と実装},
booktitle = {日本ソフトウェア科学会第38回大会講演論文集},
pages = {1--10},
year = 2021,
month = {sep},
}
[shiba-ncjssst2021]: as a page
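A minimal sketch of the mapping-reuse idea described above: the transformation memoizes source-object to transformed-object pairs, and a later (incremental) run carries the mapping over so that only new objects are transformed. The object ids, cache file, and the stand-in transform are illustrative assumptions, not git-stein's actual implementation.

```python
# Incremental transformation via a carried-over object mapping (sketch).
import hashlib
import json
import pathlib

CACHE = pathlib.Path("mapping.json")  # mapping carried over between runs

def transform(blob: bytes) -> bytes:
    return blob.upper()  # stand-in for the real file-level rewrite

def convert(blobs: dict[str, bytes]) -> dict[str, bytes]:
    mapping = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    out = {}
    for oid, blob in blobs.items():
        if oid not in mapping:           # transform only unseen objects
            mapping[oid] = transform(blob).decode()
        out[oid] = mapping[oid].encode()
    CACHE.write_text(json.dumps(mapping))  # persist for the next run
    return out

oid = hashlib.sha1(b"int f(){}").hexdigest()
print(convert({oid: b"int f(){}"}))
```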
@article{hayashi-jip202104,
author = {Shinpei Hayashi},
title = {Editor's Message to Special Issue of Software Engineering},
journal = {Journal of Information Processing},
volume = 29,
pages = 295,
year = 2021,
month = {apr},
}
[hayashi-jip202104]: as a page
@article{hayashi-ipsjj202104,
author = {林 晋平},
title = {特集「ソフトウェア工学」の編集にあたって},
journal = {情報処理学会論文誌},
volume = 62,
number = 4,
pages = 981,
year = 2021,
month = {apr},
}
[hayashi-ipsjj202104]: as a page
@article{hayashi-scico202112,
author = {Shinpei Hayashi and Michael L. Collard},
title = {Special Issue on Software Maintenance Tools at 35th International Conference on Software Maintenance and Evolution ({ICSME 2019})},
journal = {Science of Computer Programming},
volume = 212,
pages = {102706:1--2},
year = 2021,
month = {dec},
}
[hayashi-scico202112]: as a page
@misc{h_tatsugi-fose2021,
author = {林 辰宜 and ドゥルバドラハ テムーレン and 林 晋平},
title = {複合メトリクスのトレンド分析の効率化に向けて:モジュール腐敗度への適用},
howpublished = {第28回ソフトウェア工学の基礎ワークショップ},
year = 2021,
month = {nov},
}
[h_tatsugi-fose2021]: as a page
@article{ugawa-jssst202002,
author = {鵜川 始陽 and 馬谷 誠二 and 中澤 仁 and 林 晋平 and 番原 睦則},
title = {特集「ソフトウェア論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 37,
number = 1,
pages = 53,
year = 2020,
month = {feb},
}
[ugawa-jssst202002]: as a page
Recording source code changes has come to be well recognized as an effective means for understanding the evolution of existing software and making its future changes efficient. Therefore, modern integrated development environments (IDEs) tend to employ tools that record fine-grained textual changes of source code. However, there is still no satisfactory tool that accurately records textual changes. We propose ChangeMacroRecorder, which automatically and silently records all textual changes of source code and in real time correlates those textual changes with the actions causing them while a programmer is writing and modifying it on Eclipse's Java editor. The improvement with respect to the accuracy of recorded textual changes enables both programmers and researchers to exactly understand how the source code evolved. This paper presents detailed information on how ChangeMacroRecorder achieves the accurate recording of textual changes and demonstrates how accurate textual changes were recorded in our experiment consisting of nine programming tasks.
@article{maruyama-ieicet202011,
author = {Katsuhisa Maruyama and Shinpei Hayashi and Takayuki Omori},
title = {{ChangeMacroRecorder}: Accurate Recording of Fine-Grained Textual Changes of Source Code},
journal = {IEICE Transactions on Information and Systems},
volume = {E103-D},
number = 11,
pages = {2262--2277},
year = 2020,
month = {nov},
}
[maruyama-ieicet202011]: as a page
Method-level historical information is useful in various research on mining software repositories such as fault-prone module detection or evolutionary coupling identification. An existing technique named Historage converts a Git repository of a Java project to a finer-grained one. In a finer-grained repository, each Java method exists as a single file. Treating Java methods as files has an advantage, which is that Java methods can be tracked with Git mechanisms. The biggest benefit of tracking methods with Git mechanisms is that it can easily connect with any other tools and techniques built on Git infrastructure. However, Historage's tracking has an issue of accuracy, especially on small methods. More concretely, in the case that a small method is renamed or moved to another class, Historage has a limited capability to track the method. In this paper, we propose a new technique, FinerGit, to improve the trackability of Java methods with Git mechanisms. We implement FinerGit as a system and apply it to 182 open source software projects, which include 1,768K methods in total. The experimental results show that our tool has a higher capability of tracking methods in the case that methods are renamed or moved to other classes.
@article{higo-jss202007,
author = {Yoshiki Higo and Shinpei Hayashi and Shinji Kusumoto},
title = {On Tracking {Java} Methods with {Git} Mechanisms},
journal = {Journal of Systems and Software},
volume = 165,
number = 110571,
pages = {1--13},
year = 2020,
month = {jul},
}
[higo-jss202007]: as a page
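Once each Java method lives in its own file, as in a FinerGit-style repository, plain Git can follow the method's history. The sketch below shells out to `git log --follow` from Python; the repository path and per-method file name are illustrative (FinerGit stores methods as `.mjava` files, but the exact path layout here is an assumption).

```python
# Track a single method's history with ordinary Git mechanisms (sketch).
import subprocess

def method_history(repo: str, method_file: str) -> list[str]:
    # --follow lets Git keep tracking the file across renames and moves
    result = subprocess.run(
        ["git", "-C", repo, "log", "--follow", "--oneline", "--", method_file],
        capture_output=True, text=True)
    return result.stdout.splitlines()

# illustrative per-method path in a converted repository
for line in method_history(".", "src/Main#main(String[]).mjava"):
    print(line)
```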
Finding and fixing buggy code is an important and cost-intensive maintenance task, and static analysis (SA) is one of the methods developers use to perform it. SA tools warn developers about potential bugs by scanning their source code for commonly occurring bug patterns, thus giving those developers opportunities to fix the warnings (potential bugs) before they release the software. Typically, SA tools scan for general bug patterns that are common to any software project (such as null pointer dereference), and not for project-specific patterns. However, past research has pointed to this lack of customizability as a severe limiting issue in SA. Accordingly, in this paper, we propose an approach called Ammonia, which is based on statically analyzing changes across the development history of a project, as a means to identify project-specific bug patterns. Furthermore, the bug patterns identified by our tool do not relate to just one developer or one specific commit; they reflect the project as a whole and complement the warnings from other SA tools that identify general bug patterns. Herein, we report on the application of our implemented tool and approach to four Java projects: Ant, Camel, POI, and Wicket. The results obtained show that our tool could detect 19 project-specific bug patterns across those four projects. Next, through manual analysis, we determined that six of those change patterns were actual bugs and submitted pull requests based on those bug patterns. As a result, five of the pull requests were merged.
@article{higo-emse202003,
author = {Yoshiki Higo and Shinpei Hayashi and Hideaki Hata and Meiyappan Nagappan},
title = {{Ammonia}: An Approach for Deriving Project-Specific Bug Patterns},
journal = {Empirical Software Engineering},
volume = 25,
number = 3,
pages = {1951--1979},
year = 2020,
month = {mar},
}
[higo-emse202003]: as a page
To improve the usability of a revision history, change untangling, which reconstructs the history to ensure that changes in each commit belong to one intentional task, is important. Although there are several untangling approaches based on the clustering of fine-grained editing operations of source code, they often produce unsuitable results for developers, and manual tailoring of the results is necessary. In this paper, we propose ChangeBeadsThreader (CBT), an interactive environment for splitting and merging change clusters to support the manual tailoring of untangled changes. CBT provides two features: 1) a two-dimensional space where the fine-grained change history is visualized to help users find the clusters to be merged and 2) an augmented diff view that enables users to confirm the consistency of the changes in a specific cluster for finding those to be split. These features allow users to easily tailor automatically untangled changes.
@inproceedings{yamashita-saner2020,
author = {Satoshi Yamashita and Shinpei Hayashi and Motoshi Saeki},
title = {{ChangeBeadsThreader}: An Interactive Environment for Tailoring Automatically Untangled Changes},
booktitle = {Proceedings of the 27th IEEE International Conference on Software Analysis, Evolution and Reengineering},
pages = {657--661},
year = 2020,
month = {feb},
}
[yamashita-saner2020]: as a page
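A minimal sketch of the kind of time/space arrangement CBT visualizes: fine-grained edits are placed on a (timestamp, file) plane and grouped into candidate clusters when they are close in time. The record layout and the time-gap threshold are illustrative assumptions, not the tool's actual clustering model.

```python
# Group fine-grained edits into candidate clusters by time proximity (sketch).
from dataclasses import dataclass

@dataclass
class Edit:
    time: float  # seconds since session start (time axis)
    file: str    # edited file (space axis)

def cluster(edits: list[Edit], gap: float = 60.0) -> list[list[Edit]]:
    clusters: list[list[Edit]] = []
    for e in sorted(edits, key=lambda e: e.time):
        if clusters and e.time - clusters[-1][-1].time <= gap:
            clusters[-1].append(e)  # close in time: same candidate cluster
        else:
            clusters.append([e])    # large gap: start a new cluster
    return clusters

edits = [Edit(0, "A.java"), Edit(20, "A.java"), Edit(500, "B.java")]
print([[e.file for e in c] for c in cluster(edits)])
```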
@misc{yotaro-ses2020,
author = {関 洋太朗 and 林 晋平 and 佐伯 元司},
title = {ユースケース記述中の不吉な臭いの検出},
howpublished = {ソフトウェアエンジニアリングシンポジウム2020},
year = 2020,
month = {sep},
}
[yotaro-ses2020]: as a page
@misc{higo-jss-happyhour202012,
author = {Yoshiki Higo and Shinpei Hayashi and Shinji Kusumoto},
title = {On Tracking {Java} Methods with {Git} Mechanisms},
howpublished = {JSS Happy Hour},
year = 2020,
month = {dec},
}
[higo-jss-happyhour202012]: as a page
Source code analysis tools have been used to investigate which styles of source code developers choose for the same purpose. However, most existing source code analysis tools are not applicable to diverse programming languages, and investigations using them have been limited to specific languages. In this paper, we propose a technique to reduce the effort of supporting diverse languages in the change pattern extraction of the existing source code analysis tool MPAnalyzer. The technique uniformly realizes the features that are currently implemented per target language in pattern extraction, such as source code normalization, by taking as input the formal grammar of the target programming language and grammar-based extraction rules. We implemented the technique as a library named nitron and achieved pattern extraction from multiple languages. We also confirmed that using the technique reduces the effort required to support a new language.
@article{ando-n-sigss202007,
author = {安藤 直樹 and 佐伯 元司 and 林 晋平},
title = {多様なプログラミング言語に適用可能なソースコード変更パターン抽出手法},
journal = {電子情報通信学会技術研究報告},
volume = 120,
number = 82,
pages = {7--12},
year = 2020,
month = {jul},
}
[ando-n-sigss202007]: as a page
Existing techniques that automatically detect refactorings performed between versions cannot detect most local refactorings, which are performed within a single function. In this paper, we first extracted real instances of local refactorings from 123 open source software repositories and identified the primitive operations that compose local refactorings. We then regard the detection of local refactorings between versions as a problem of searching for a sequence of primitive operations and propose a technique that derives an operation sequence minimizing the difference between the versions. States in the search space are represented as abstract syntax trees expressing the structure of the source code. As the evaluation function, we estimate the edit distance between the version obtained by applying the operation sequence under search and the final version. We evaluated the proposed technique using A* search and beam search as concrete search strategies.
@article{tsutsui_y-sigse202003,
author = {筒井 湧暉 and セーリム ナッタウット and 林 晋平 and 佐伯 元司},
title = {探索に基づくローカルリファクタリングの検出},
journal = {情報処理学会研究報告},
volume = {2020-SE-204},
number = 4,
pages = {1--8},
year = 2020,
month = {mar},
}
[tsutsui_y-sigse202003]: as a page
@misc{yutarootani-fose2020,
author = {大谷 悠太郎 and 佐伯 元司 and 林 晋平},
title = {複数の品質指標を考慮したクローンリファクタリングに向けて},
howpublished = {第27回ソフトウェア工学の基礎ワークショップ},
year = 2020,
month = {nov},
}
[yutarootani-fose2020]: as a page
@misc{watashun-fose2020,
author = {渡辺 俊介 and 佐伯 元司 and 林 晋平},
title = {複数のタスク情報を用いたマイルストーンコンテキストの捜索法の確立に向けて},
howpublished = {第27回ソフトウェア工学の基礎ワークショップ},
year = 2020,
month = {nov},
}
[watashun-fose2020]: as a page
The Open Web Application Security Project (OWASP) defines Static Application Security Testing (SAST) tools as those that can help find security vulnerabilities in the source code or compiled code of software. Such tools detect and classify the vulnerability warnings into one of many types (e.g., input validation and representation). It is well known that these tools produce high numbers of false positive warnings. However, what is not known is if specific types of warnings have a higher predisposition to be false positives or not. Therefore, our goal is to investigate the different types of SAST-produced warnings and their evolution over time to determine if one type of warning is more likely to have false positives than others. To achieve our goal, we carry out a large empirical study where we examine 116 large and popular C++ projects using six different state-of-the-art open source and commercial SAST tools that detect security vulnerabilities. In order to track a piece of code that has been tagged with a warning, we use a new state-of-the-art framework called cregit+ that traces source code lines across different commits. The results demonstrate the potential of using SAST tools as an assessment tool to measure the quality of a product and the possible risks without manually reviewing the warnings. In addition, this work shows that the pattern-matching static analysis technique is a very powerful method when combined with other advanced analysis methods.
@article{bushra-jss201912,
author = {Bushra Aloraini and Meiyappan Nagappan and Daniel M. German and Shinpei Hayashi and Yoshiki Higo},
title = {An Empirical Study of Security Warnings from Static Application Security Testing Tools},
journal = {Journal of Systems and Software},
volume = 158,
pages = {1--25},
year = 2019,
month = {dec},
}
[bushra-jss201912]: as a page
大規模なソフトウェア開発では,ある特定のバグを解決するために修正すべきソースコード箇所を見つけるBug Localizationが必要である.情報検索に基づくBug Localization手法(IR手法)は,バグに関して記述されたバグレポートとソースコード内のモジュールとのテキスト類似度を計算し,これに基づき修正すべきモジュールを特定する.しかし,この手法は各モジュールのバグ含有可能性を考慮していないため精度が低い.本論文では,ソースコード内のモジュールのバグ含有可能性として不吉な臭いを用い,これを既存のIR手法と組み合わせたBug Localization手法を提案する.提案手法では,不吉な臭いの深刻度と,ベクトル空間モデルに基づくテキスト類似度を統合した新しい評価値を定義している.これは深刻度の高い不吉な臭いとバグレポートとの高いテキスト類似性の両方を持つモジュールを上位に位置付け,バグを解決するために修正すべきモジュールを予測する.4つのOSSプロジェクトの過去のバグレポートを用いた評価では,いずれのプロジェクト,モジュール粒度においても提案手法の精度が既存のIR手法を上回り,クラスレベルとメソッドレベルでそれぞれ平均22%,137%の向上がみられた.また,不吉な臭いがBug Localizationに与える影響について調査を行った.
Bug localization is a technique that has been proposed to support the process of identifying the locations of bugs specified in a bug report. For example, information retrieval (IR)-based bug localization approaches suggest potential locations of the bug based on the similarity between the bug description and the source code. However, while many approaches have been proposed to improve the accuracy, the likelihood of each module having a bug is often overlooked or they are treated equally, whereas this may not be the case. For example, modules having code smells have been found to be more prone to changes and bugs. Therefore, in this paper, we propose a technique to leverage code smells to improve bug localization. By combining the code smell severity with the textual similarity from IR-based bug localization, we can identify the modules that are not only similar to the bug description but also have a higher likelihood of containing bugs. Our case study on four open source projects shows that our technique can improve the baseline IR-based approach by 22% and 137% on average for class and method levels, respectively. In addition, we conducted investigations concerning the effect of code smell on bug localization.
@article{takahashi-a-at-ipsjj201904,
author = {高橋 碧 and セーリム ナッタウット and 林 晋平 and 佐伯 元司},
title = {情報検索に基づくBug Localizationへの不吉な臭いの利用},
journal = {情報処理学会論文誌},
volume = 60,
number = 4,
pages = {1040--1050},
year = 2019,
month = {apr},
}
[takahashi-a-at-ipsjj201904]: as a page
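A minimal sketch of the combination this paper describes: an IR-based similarity between the bug report and each module is blended with the module's smell severity into a single ranking score. The normalization and the weight alpha are illustrative assumptions; the paper defines its own scoring formula.

```python
# Blend IR similarity and smell severity into one ranking score (sketch).
def combined_score(similarity: dict[str, float],
                   severity: dict[str, float],
                   alpha: float = 0.5) -> list[tuple[str, float]]:
    def norm(d):
        hi = max(d.values(), default=0.0) or 1.0
        return {k: v / hi for k, v in d.items()}
    sim, sev = norm(similarity), norm(severity)
    scores = {m: alpha * sim[m] + (1 - alpha) * sev.get(m, 0.0) for m in sim}
    return sorted(scores.items(), key=lambda p: -p[1])

print(combined_score({"A.foo()": 0.8, "B.bar()": 0.6},
                     {"B.bar()": 3.0}))  # B is boosted by its severe smell
```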
@article{ishikawa-jssst201902,
author = {石川 冬樹 and 鵜川 始陽 and 馬谷 誠二 and 小宮 常康 and 林 晋平 and 横山 大作},
title = {特集「ソフトウェア論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 36,
number = 1,
pages = 37,
year = 2019,
month = {feb},
}
[ishikawa-jssst201902]: as a page
A decaying module refers to a module whose quality is getting worse and is likely to become smelly in the future. The concept has been proposed to mitigate the problem that developers cannot track the progression of code smells and prevent them from occurring. To support developers in a proactive refactoring process to prevent code smells, a prediction approach has been proposed to detect modules that are likely to become decaying modules in the next milestone. Our prior study has shown that the modules that developers will modify, used as an estimation of the developers' context, can significantly improve the performance of the prediction model. Nevertheless, this requires a developer with perfect knowledge of the locations of changes to manually specify such information to the system. To this end, in this study, we explore the use of automated impact analysis techniques to estimate the developers' context. Such techniques will enable developers to improve the performance of the decaying module prediction model without the need for perfect knowledge or manual input to the system. Furthermore, we conduct a study on the relationship between the accuracy of an impact analysis technique and its effect on improving decaying module prediction, and discuss future directions that should be explored.
@inproceedings{natthawute-icsme2019,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {Can Automated Impact Analysis Techniques Help Predict Decaying Modules?},
booktitle = {Proceedings of the 35th IEEE International Conference on Software Maintenance and Evolution},
pages = {541--545},
year = 2019,
month = {oct},
}
[natthawute-icsme2019]: as a page
Use case modeling is very popular to represent the functionality of the system to be developed, and it consists of two parts: use case diagram and use case description. Use case descriptions are written in structured natural language (NL), and the usage of NL can lead to poor descriptions such as ambiguous, inconsistent and/or incomplete descriptions. Poor descriptions lead to missing requirements and eliciting incorrect requirements as well as less comprehensiveness of produced use case models. This paper proposes a technique to automate the detection of bad smells of use case descriptions, i.e., symptoms of poor descriptions. First, to clarify bad smells, we analyzed existing use case models to concretely discover poor use case descriptions and developed a catalogue of bad smells. Some of the bad smells can be refined into measures using the Goal-Question-Metric paradigm to automate their detection. The main contribution of this paper is the automated detection of bad smells. We implemented an automated smell detector for 22 bad smells and assessed its usefulness in an experiment. As a result, the first version of our tool achieved a precision of 0.591 and a recall of 0.981.
@inproceedings{yotaro-re2019,
author = {Yotaro Seki and Shinpei Hayashi and Motoshi Saeki},
title = {Detecting Bad Smells in Use Case Descriptions},
booktitle = {Proceedings of the 27th IEEE International Requirements Engineering Conference},
pages = {98--108},
year = 2019,
month = {sep},
}
[yotaro-re2019]: as a page
@misc{takahashi-a-at-iwesep2019,
author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {Investigating Effective Usages of Code Smell Information for Bug Localization},
howpublished = {Presented at 10th International Workshop on Empirical Software Engineering in Practice},
year = 2019,
month = {dec},
}
[takahashi-a-at-iwesep2019]: as a page
@misc{yamashita-iwesep2019,
author = {Satoshi Yamashita and Shinpei Hayashi and Motoshi Saeki},
title = {An Interactive Environment for Tailoring Automatically Untangled Changes},
howpublished = {Presented at 10th International Workshop on Empirical Software Engineering in Practice},
year = 2019,
month = {dec},
}
[yamashita-iwesep2019]: as a page
@misc{yutarootani-iwesep2019,
author = {Yutaro Otani and Motoshi Saeki and Shinpei Hayashi},
title = {Toward Automated Refactoring of Clone Groups},
howpublished = {Presented at 10th International Workshop on Empirical Software Engineering in Practice},
year = 2019,
month = {dec},
}
[yutarootani-iwesep2019]: as a page
While extracting a subset of a commit history, specifying the necessary portion is a time-consuming task for developers. Several commit-based history slicing techniques have been proposed to identify dependencies between commits and to extract a related set of commits using a specific commit as a slicing criterion. However, the resulting subset of commits becomes large if there exist commits for systematic edits whose changes do not depend on each other. We empirically investigated the impact of systematic edits on history slicing. In this study, commits in which systematic edits were detected are split per file so that unnecessary dependencies between commits are eliminated. In several histories of open source systems, the size of the history slices was reduced by 13.3-57.2% on average after splitting the commits for systematic edits.
@inproceedings{rfunaki-msr2019,
author = {Ryosuke Funaki and Shinpei Hayashi and Motoshi Saeki},
title = {The Impact of Systematic Edits in History Slicing},
booktitle = {Proceedings of the 16th International Conference on Mining Software Repositories},
pages = {555--559},
year = 2019,
month = {may},
}
[rfunaki-msr2019]: as a page
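A minimal sketch of commit-based history slicing as described above: starting from a slicing criterion, inter-commit dependencies are followed transitively. Splitting a systematic-edit commit per file removes edges from this graph, shrinking the slice. The dependency graph here is an illustrative input, not the paper's extraction procedure.

```python
# Transitive history slice over an inter-commit dependency graph (sketch).
def history_slice(deps: dict[str, set[str]], criterion: str) -> set[str]:
    result, work = set(), [criterion]
    while work:
        c = work.pop()
        if c not in result:
            result.add(c)
            work.extend(deps.get(c, ()))  # commits that c depends on
    return result

deps = {"c3": {"c2"}, "c2": {"c1"}, "c1": set()}
print(history_slice(deps, "c3"))  # {'c1', 'c2', 'c3'}
# After splitting c2 per file, c3 may depend only on the fragment of c2
# touching the same file, so the resulting slice gets smaller.
```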
Source code quality is often measured using code smells, which are indicators of design flaws or problems in the source code. Code smells can be detected using tools such as static analyzers that detect code smells based on source code metrics. Further, developers perform refactoring activities based on the results of such detection tools to improve source code quality. However, such an approach can be considered reactive refactoring, i.e., developers react to code smells after they occur. This means that developers first suffer the effects of low-quality source code (e.g., low readability and understandability) before they start solving code smells. In this study, we focus on proactive refactoring, i.e., refactoring source code before it becomes smelly. This approach would allow developers to maintain source code quality without having to suffer the impact of code smells. To support the proactive refactoring process, we propose a technique to detect decaying modules, which are non-smelly modules that are about to become smelly. We present empirical studies on open source projects with the aim of studying the characteristics of decaying modules. Additionally, to facilitate developers in the refactoring planning process, we perform a study on using a machine learning technique to predict decaying modules and report the factor that contributes most to the performance of the model under consideration.
@inproceedings{natthawute-iwor2019,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {Toward Proactive Refactoring: An Exploratory Study on Decaying Modules},
booktitle = {Proceedings of the 3rd International Workshop on Refactoring},
pages = {39--46},
year = 2019,
month = {may},
}
[natthawute-iwor2019]: as a page
@misc{takahashi-a-at-ses2019,
author = {高橋 碧 and セーリム ナッタウット and 林 晋平 and 佐伯 元司},
title = {情報検索に基づくBug Localizationへの不吉な臭いの利用},
howpublished = {ソフトウェアエンジニアリングシンポジウム2019},
year = 2019,
month = {aug},
}
[takahashi-a-at-ses2019]: as a page
In large-scale software development, bug localization (BL), which finds the modules to be fixed to resolve a specific bug, is important. In this paper, using a large dataset, we apply a BL technique that utilizes code smells and clarify the usefulness of using code smell detection results for BL. Specifically, we clarified useful ways of using code smells and investigated with which BL techniques it is effective to combine them. Through this, we investigate the usefulness of code smells, which have not yet been widely used for BL. We found that using the maximum of the code smell severities across multiple granularities yields superior results. We also confirmed that combining code smell information with existing BL techniques can actually improve their accuracy.
@article{takahashi-a-at-sigss201910,
author = {高橋 碧 and セーリム ナッタウット and 林 晋平 and 佐伯 元司},
title = {Bug Localizationに対して不吉な臭いを使用する有用性の解明},
journal = {電子情報通信学会技術研究報告},
volume = 119,
number = 246,
pages = {1--6},
year = 2019,
month = {oct},
}
[takahashi-a-at-sigss201910]: as a page
Improving the quality of use case descriptions is important in software development because it relates to quality improvement and development cost reduction. In practice, however, quality problems in use case descriptions do occur, and manually detecting the low-quality parts is laborious. In this paper, we systematically define low-quality parts in descriptions as bad smells and attempt to automate their detection. To define the bad smells, we investigated low-quality parts in eight use case descriptions and classified them based on their causes and locations. Based on this classification, we systematically defined 61 types of smells in total, developed metrics that quantify the characteristics of descriptions using the GQM paradigm as well as predicates for judging bad smells, and implemented a tool that detects 25 of the defined smells.
@article{yotaro-sigse201903,
author = {関 洋太朗 and 林 晋平 and 佐伯 元司},
title = {ユースケース記述中の不吉な臭いの体系化と検出},
journal = {情報処理学会研究報告},
volume = {2019-SE-201},
number = 3,
pages = {1--8},
year = 2019,
month = {mar},
}
[yotaro-sigse201903]: as a page
When extracting and reconstructing a part of a commit history, identifying the minimum partial history necessary for the reconstruction is a heavy burden on developers. In this paper, we propose a technique and a tool that support developers in reconstructing a partial commit history. The technique analyzes dependencies between changes at the textual and build levels and prevents reconstruction failures by identifying in advance the partial commit history necessary for the reconstruction. In addition, to suppress the bloat of the partial history caused by widely propagating dependencies, it identifies candidate dependencies that can be removed by detecting systematic edits. We implemented a tool realizing the technique and evaluated it on several OSS projects. The accuracy of the build dependency analysis was 94% or higher, and the reduction rate of the partial histories was up to 43.8%, confirming that the support provided by the technique is useful.
@article{rfunaki-sigss201901,
author = {舟木 亮介 and 林 晋平 and 佐伯 元司},
title = {依存関係を考慮した部分コミット履歴の再構成支援},
journal = {電子情報通信学会技術研究報告},
volume = 118,
number = 385,
pages = {37--42},
year = 2019,
month = {jan},
}
[rfunaki-sigss201901]: as a page
Software metrics, including code smell detection, are widely used to quantify maintainability. Most metric measurement tools measure the source code at a certain point in time, whereas actual development proceeds in parallel. Without the source code in which the parallel changes have been merged, existing measurement tools cannot quantify it. In this paper, we estimate post-merge metric values from unmerged branches and attempt to grasp maintainability proactively. The estimated values obtained by the proposed technique matched the metric values of actually merged branches at a high rate of 87.6%. Moreover, using the proposed technique, we succeeded in the early detection of two code smells that did not become apparent until the merge.
@article{k_isemoto-sigss201903,
author = {伊勢本 圭亮 and 佐伯 元司 and 林 晋平},
title = {ブランチを考慮したプロアクティブなソフトウェアメトリクス値の算出},
journal = {電子情報通信学会技術研究報告},
volume = 118,
number = 471,
pages = {73--78},
year = 2019,
month = {mar},
}
[k_isemoto-sigss201903]: as a page
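One simple way to think about estimating a post-merge metric value before the branches are merged is to apply both branches' deltas to the merge base, assuming their changes do not interact. This additive model is an assumption made for illustration; the paper's estimation procedure may differ.

```python
# Additive pre-merge estimation of a metric value (illustrative sketch).
def estimate_merged(base: float, branch_a: float, branch_b: float) -> float:
    # each branch contributes its delta against the common merge base
    return base + (branch_a - base) + (branch_b - base)

# e.g., lines of code of a class on the merge base and on two branches
print(estimate_merged(base=120, branch_a=135, branch_b=110))  # -> 125
```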
リファクタリングに関する様々な実証的研究が行われている.実証的研究を行うためには,リファクタリングの実例を収集する必要がある.既存の手法やデータセットは,十分に詳細化された多種・高精度のデータを得られないため,それらを得るには手動でデータ作成を行う必要がある.しかし,手動によるデータ作成は,大きな労力を要する.本論文では,不完全なリファクタリング実例データを修正・補完して,実証的研究に利用可能なように詳細化するための環境RefactorHubを提案する.RefactorHubは,手動によるデータ修正・補完を支援し,十分に詳細化された多種・高精度のデータセットを得ることを目的とする.RefactorHubでは,リファクタリングのパラメータの型が抽象構文木に基づき定義されており,これに基づき構文要素を選択させるインターフェースを用いることにより,誤りの少ないデータの修正・補完を行う.実際にRefactorHubを利用して,種類と説明のみを持つリファクタリング実例データに対し,補完を行い,詳細化することができた.また,その際にかかった時間は,リファクタリングインスタンス1つにつき平均2.4分と短かった.
@article{kuramoto-sigss201903,
author = {倉本 涼 and 林 晋平 and 佐伯 元司},
title = {リファクタリング実証的研究のためのデータセット作成環境},
journal = {電子情報通信学会技術研究報告},
volume = 118,
number = 471,
pages = {67--72},
year = 2019,
month = {mar},
}
[kuramoto-sigss201903]: as a page
保守性や履歴分析の信頼性向上のため,リポジトリ中の改版履歴を1コミット1タスクに対応するよう構造化し直すこと,いわゆるChange Untanglingは重要である.Change Untanglingの自動化に関する研究は行われているが,結果の最適化は困難であり開発者による手動での最適化を行う必要がある.本論文ではこのための支援を行うために,最終的に1つのコミットを構成する細粒度変更をクラスタ化する環境を提案する.本環境は,変更の発生時刻を時間軸,変更箇所を空間軸として,2次元に配置することにより大量の細粒度変更を可視化した環境を提供する.これによりクラスタへの追加や除去の対象となる細粒度変更の発見を支援する.
@article{yamashita-sigss201910,
author = {山下 慧 and 林 晋平 and 佐伯 元司},
title = {Change Untangling結果の対話的最適化支援環境の試作},
journal = {電子情報通信学会技術研究報告},
volume = 119,
number = 246,
pages = {13--18},
year = 2019,
month = {oct},
}
[yamashita-sigss201910]: as a page
2018年12月4~7日奈良にて開催された第25回アジア太平洋ソフトウェア工学国際会議(APSEC 2018)に関して,主催者側および参加者側からの見解を述べる.
This paper gives our views, from both the organizers' and the participants' sides, on the 25th Asia-Pacific Software Engineering Conference (APSEC 2018) held in Nara on December 4-7, 2018.
@article{maruyama-sigse201903,
author = {丸山 勝久 and 鵜林 尚靖 and 鷲崎 弘宜 and 肥後 芳樹 and 林 晋平 and 亀井 靖高 and 堀田 圭佑},
title = {第25回アジア太平洋ソフトウェア工学国際会議(APSEC 2018)開催および参加報告},
journal = {情報処理学会研究報告},
volume = {2019-SE-201},
number = 3,
pages = {1--8},
year = 2019,
month = {mar},
}
[maruyama-sigse201903]: as a page
In software development, it is important to find and fix faults (bugs) at an early stage of development. Therefore, fault-prediction techniques that predict the locations of latent faults have attracted attention. Fault-prediction techniques calculate, for each file or module, a latent fault rate representing the probability that a latent fault exists. By intensively testing and reviewing the files and modules with high latent fault rates, faults can be found at an early stage of development. Recently adopted fault-prediction techniques calculate the latent fault rate based on coding convention violations detectable by static analysis tools and information on known faults. However, immediately after a development project starts, there is little information on convention violations and faults to use for the calculation, so conventional techniques cannot calculate the latent fault rate. In this study, we proposed and evaluated a fault-prediction technique that calculates the latent fault rate of each file based on information such as the review findings and faults that each developer introduced in past projects. The proposed technique can calculate latent fault rates even immediately after development starts. Evaluating it on one in-house project, we found that the top 25% of files with the highest calculated latent fault rates contained about 60% of all faults. With the proposed technique, testing and reviewing files in descending order of latent fault rate enables finding and fixing faults at an early stage of development.
@inproceedings{takai-ss2019,
author = {高井 康勢 and 加藤 正恭 and 三部 良太 and 林 晋平 and 小林 隆志},
title = {開発者に着目したFault-Prediction技術},
booktitle = {ソフトウェア・シンポジウム2019 in 熊本 予稿集},
pages = {1--8},
year = 2019,
month = {jun},
}
[takai-ss2019]: as a page
@proceedings{minelli-maint2019,
title = {2nd IEEE International Workshop on Mining and Analyzing Interaction Histories (MAINT 2019)},
year = 2019,
month = {feb},
}
[minelli-maint2019]: as a page
Code smells are indicators of design flaws or problems in the source code. Various tools and techniques have been proposed for detecting code smells. These tools generally detect a large number of code smells, so approaches have also been developed for prioritizing and filtering code smells. However, lack of empirical data detailing how developers filter and prioritize code smells hinders improvements to these approaches. In this study, we investigated ten professional developers to determine the factors they use for filtering and prioritizing code smells in an open source project under the condition that they complete a list of five tasks. In total, we obtained 69 responses for code smell filtration and 50 responses for code smell prioritization from the ten professional developers. We found that Task relevance and Smell severity were most commonly considered during code smell filtration, while Module importance and Task relevance were employed most often for code smell prioritization. These results may facilitate further research into code smell detection, prioritization, and filtration to better focus on the actual needs of developers.
@article{natthawute-ieicet201807,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {An Investigative Study on How Developers Filter and Prioritize Code Smells},
journal = {IEICE Transactions on Information and Systems},
volume = {E101-D},
number = 7,
pages = {1733--1742},
year = 2018,
month = {jul},
}
[natthawute-ieicet201807]: as a page
要求仕様書は主に自然言語で記述されているため,文意のあいまい性などの問題がある.高品質な要求仕様書を得るためには,要求分析者がこれらの問題点を認識し発見することが重要である.本論文ではIEEE 830で定義された品質特性をもとに,日本語で記述された要求仕様書の問題点を検出する手法およびその自動化ツールreqcheckerを提案する.提案手法では,要求仕様書全体と要求文の解析,さらに要求文間の関係解析を行い要求仕様書中の問題点を検出する.reqcheckerでは非あいまい性など6つの品質特性に関する問題点を検出しマーキングを行うことで,要求分析者による問題点の発見を支援する.例題への適用および被験者実験によりreqcheckerの有用性に関する予備評価を行ったところ,良好な結果を得た.
Some requirements specification documents have several problems such as the ambiguity of sentences because they are mainly written in natural language. It is important for requirements analysts to find and analyze these problems. In this paper, we propose a technique for detecting problems in a requirements specification document based on the quality characteristics defined in IEEE 830, using the syntactical structure of the specification. Our technique analyzes the structure and relationships of the sentences and the whole of the given specification. A specification checker named reqchecker, which automates our technique, can support finding problems related to six quality characteristics. The preliminary evaluation results show that reqchecker has acceptable detection accuracy and high supporting effects for some particular quality characteristics.
@article{hayashi-ieicet201801,
author = {林 晋平 and 有賀 顕 and 佐伯 元司},
title = {reqchecker:IEEE 830の品質特性に基づく日本語要求仕様書の問題点検出ツール},
journal = {電子情報通信学会論文誌},
volume = {J101-D},
number = 1,
pages = {57--67},
year = 2018,
month = {jan},
}
[hayashi-ieicet201801]: as a page
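A minimal sketch of one class of check a reqchecker-like tool performs: flagging sentences that contain ambiguity-prone wording (here, English stand-ins for the Japanese patterns the tool actually targets). The word list and sentence splitting are illustrative assumptions.

```python
# Flag ambiguity-prone phrases per sentence in a specification (sketch).
import re

AMBIGUOUS = ["as appropriate", "etc.", "fast", "user-friendly", "if possible"]

def check(spec: str) -> list[tuple[int, str]]:
    findings = []
    for i, sentence in enumerate(re.split(r"(?<=[.!?])\s+", spec), start=1):
        for phrase in AMBIGUOUS:
            if phrase in sentence.lower():
                findings.append((i, phrase))  # (sentence number, trigger)
    return findings

spec = "The system shall respond fast. Logs are archived as appropriate."
print(check(spec))  # [(1, 'fast'), (2, 'as appropriate')]
```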
Existing techniques for detecting code smells (indicators of source code problems) do not consider the current context, which renders them unsuitable for developers who have a specific context, such as modules within their focus. Consequently, the developers must spend time identifying relevant smells. We propose a technique to prioritize code smells using the developers' context. Explicit data of the context are obtained using a list of issues extracted from an issue tracking system. We applied impact analysis to the list of issues and used the results to specify the context-relevant smells. Results show that our approach can provide developers with a list of prioritized code smells related to their current context. We conducted several empirical studies to investigate the characteristics of our technique and factors that might affect the ranking quality. Additionally, we conducted a controlled experiment with professional developers to evaluate our technique. The results demonstrate the effectiveness of our technique.
@article{natthawute-jsep201806,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {Context-Based Approach to Prioritize Code Smells for Prefactoring},
journal = {Journal of Software: Evolution and Process},
volume = 30,
number = 6,
pages = {e1886:1--24},
year = 2018,
month = {jun},
}
[natthawute-jsep201806]: as a page
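A minimal sketch of the idea of context-based prioritization: smells located in modules that impact analysis relates to the open issues are ranked first. The impact sets and the smell list are illustrative inputs, not the paper's actual algorithm or data model.

```python
# Rank smells by how many issues' impact sets contain their module (sketch).
def prioritize(smells: list[tuple[str, str]],
               impact_sets: list[set[str]]) -> list[tuple[str, str]]:
    def relevance(module: str) -> int:
        # number of issues whose estimated impact set contains the module
        return sum(module in s for s in impact_sets)
    return sorted(smells, key=lambda sm: -relevance(sm[0]))

smells = [("Cart", "God Class"), ("Login", "Long Method")]
issues = [{"Login", "Auth"}, {"Login"}]  # impact sets of two open issues
print(prioritize(smells, issues))        # Login's smell comes first
```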
Utilizing software architecture patterns is important for reducing maintenance costs. However, maintaining code according to the constraints defined by the architecture patterns is time-consuming work. As described herein, we propose a technique to detect code fragments that are incompliant to the architecture as fine-grained architectural violations. For this technique, the dependence graph among code fragments extracted from the source code and the inference rules according to the architecture are the inputs. A set of candidate components to which a code fragment can be affiliated is attached to each node of the graph and is updated step-by-step. The inference rules express the components' responsibilities and dependency constraints. They remove candidate components of each node that do not satisfy the constraints from the current estimated state of the surrounding code fragment. If the inferred role of a code fragment does not include the component that the code fragment currently belongs to, then it is detected as a violation. We have implemented our technique for the Model-View-Controller for Web Application architecture pattern. By applying the technique to web applications implemented using Play Framework, we obtained accurate detection results. We also investigated how much does each inference rule contribute to the detection of violations.
@article{hayashi-ieicet201807,
author = {Shinpei Hayashi and Fumiki Minami and Motoshi Saeki},
title = {Detecting Architectural Violations Using Responsibility and Dependency Constraints of Components},
journal = {IEICE Transactions on Information and Systems},
volume = {E101-D},
number = 7,
pages = {1780--1789},
year = 2018,
month = {jul},
}
[hayashi-ieicet201807]: as a page
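A minimal sketch of the candidate-set propagation described above: each code fragment starts with a set of candidate components, a dependency constraint prunes candidates until a fixpoint is reached, and a fragment whose actual component is no longer among its candidates is reported as a violation. The MVC-like rules, the "evident" seeding, and the graph are illustrative assumptions.

```python
# Detect architectural violations by pruning candidate components (sketch).
COMPONENTS = {"Model", "View", "Controller"}
# which component may depend on which (illustrative MVC-like constraints)
ALLOWED = {("Controller", "Model"), ("Controller", "View"),
           ("View", "Model"), ("Model", "Model")}

def detect(actual: dict[str, str], deps: list[tuple[str, str]],
           evident: set[str]) -> list[str]:
    # fragments whose role is evident (e.g., from responsibility rules)
    # keep a singleton candidate set; the rest start with every component
    cand = {n: {actual[n]} if n in evident else set(COMPONENTS)
            for n in actual}
    changed = True
    while changed:  # propagate the dependency constraint to a fixpoint
        changed = False
        for src, dst in deps:
            ok = {c for c in cand[src]
                  if any((c, d) in ALLOWED for d in cand[dst])}
            if ok != cand[src]:
                cand[src], changed = ok, True
    return [n for n, c in actual.items() if c not in cand[n]]

# fragment f (placed in View) depends on g, which is evidently a Controller
print(detect({"f": "View", "g": "Controller"}, [("f", "g")], {"g"}))  # ['f']
```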
Recording code changes has come to be well recognized as an effective means for understanding the evolution of existing programs and making their future changes efficient. Although fine-grained textual changes of source code are worth leveraging in various situations, there is no satisfactory tool that records such changes. This paper proposes yet another tool, called ChangeMacroRecorder, which automatically records all textual changes of source code while a programmer writes and modifies it on Eclipse's Java editor. Its capability has been improved with respect to both the accuracy of its recording and the convenience for its use. Tool developers can easily and cheaply create new applications that utilize recorded changes by embedding our proposed recording tool into them.
@inproceedings{maruyama-saner2018,
author = {Katsuhisa Maruyama and Shinpei Hayashi and Takayuki Omori},
title = {ChangeMacroRecorder: Recording Fine-Grained Textual Changes of Source Code},
booktitle = {Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering},
pages = {537--541},
year = 2018,
month = {mar},
}
[maruyama-saner2018]: as a page
Bug localization is a technique that has been proposed to support the process of identifying the locations of bugs specified in a bug report. A traditional approach such as information retrieval (IR)-based bug localization calculates the similarity between the bug description and the source code and suggests locations that are likely to contain the bug. However, while many approaches have been proposed to improve the accuracy, the likelihood of each module having a bug is often overlooked or they are treated equally, whereas this may not be the case. For example, modules having code smells have been found to be more prone to changes and faults. Therefore, in this paper, we explore a first step toward leveraging code smells to improve bug localization. By combining the code smell severity with the textual similarity from IR-based bug localization, we can identify the modules that are not only similar to the bug description but also have a higher likelihood of containing bugs. Our preliminary evaluation on four open source projects shows that our technique can improve the baseline approach by 142.25% and 30.50% on average for method and class levels, respectively.
@inproceedings{takahashi-a-at-icpc2018,
author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {A Preliminary Study on Using Code Smells to Improve Bug Localization},
booktitle = {Proceedings of the 26th IEEE/ACM International Conference on Program Comprehension},
pages = {324--327},
year = 2018,
month = {may},
}
[takahashi-a-at-icpc2018]: as a page
Developers often save multiple kinds of source code edits into a commit in a version control system, producing a tangled change, which is difficult to understand and revert. However, its separation using an existing sequence-based change representation is tough. We propose a new visualization technique to show the details of a tangled change and align its component edits in a tree structure for expressing multiple groups of changes. Our technique is combined with utilizing refactoring detection and change relevance calculation techniques for constructing the structural tree. Our combination allows us to divide the change into several associations. We have implemented a tool and conducted a controlled experiment with industrial developers to confirm its usefulness and efficiency. Results show that by using our tool with tree visualization, the subjects could understand and decompose tangled changes more easily, faster, and with higher accuracy than with the baseline file list visualization.
@inproceedings{sarocha-compsac2018,
author = {Sarocha Sothornprapakorn and Shinpei Hayashi and Motoshi Saeki},
title = {Visualizing a Tangled Change for Supporting Its Decomposition and Commit Construction},
booktitle = {Proceedings of the 42nd IEEE Computer Software and Applications Conference},
pages = {74--79},
year = 2018,
month = {jul},
}
[sarocha-compsac2018]: as a page
Renaming identifiers is a frequently performed refactoring, and correctly renaming multiple identifiers at once with a tool reduces developers' burden and prevents missed changes. To perform such batch renaming accurately, it is necessary to identify the words in identifiers that refer to the same concept and should therefore be renamed together. In this paper, we focus on the relationships between identifiers in a program. We defined relationships between identifiers that were renamed simultaneously in past change histories and investigated how many simultaneous renamings with those relationships exist. We also conducted an experiment on how well each relationship, when combined with an existing batch renaming technique, can preferentially recommend the simultaneously renamed identifiers.
@article{umekawa-sigss201801,
author = {梅川 尚孝 and 林 晋平 and 佐伯 元司},
title = {名前変更リファクタリングが行われた識別子間の関係性に関する実証的調査},
journal = {電子情報通信学会技術研究報告},
volume = 117,
number = 381,
pages = {1--6},
year = 2018,
month = {jan},
}
[umekawa-sigss201801]: as a page
Dynamic feature location techniques based on formal concept analysis have a problem: creating their input, a mapping between scenarios and the features they exercise, is not easy. To solve this problem, we propose a technique that identifies the correspondence between features and scenarios by identifying feature start points. The proposed technique identifies feature start points from changes in the commonality among executed scenarios, thereby supporting the estimation of feature names and the identification of their correspondence with scenarios. We conducted an evaluation experiment on open source software. As a result, assuming only domain knowledge of the target software, we could construct 41% of the mappings between features and scenarios.
@article{maaki-sigss201803,
author = {中野 真明貴 and 野田 訓広 and 小林 隆志 and 林 晋平},
title = {実行トレースの共通性分析に基づく機能開始点の特定},
journal = {電子情報通信学会技術研究報告},
volume = 117,
number = 477,
pages = {51--56},
year = 2018,
month = {mar},
}
[maaki-sigss201803]: as a page
In large-scale software development, bug localization, which finds the source code locations to be fixed to resolve a specific bug, is necessary. Information retrieval-based bug localization techniques (IR techniques) calculate the textual similarity between a bug report describing the bug and the modules in the source code, and identify the modules to be fixed based on it. However, their accuracy is low because they do not consider the bug-proneness of each module. In this paper, we propose a bug localization technique that uses code smells as the bug-proneness of modules in the source code and combines them with an existing IR technique. The proposed technique defines a new score that integrates the severity of code smells with the textual similarity based on the vector space model. It ranks highly the modules that both have severe code smells and are highly textually similar to the bug report, and predicts the modules to be fixed to resolve the bug. In an evaluation using past bug reports of four OSS projects, the proposed technique outperformed the existing IR technique for every project and module granularity, with an improvement of up to 269%.
@article{takahashi-a-at-sigse201803,
author = {高橋 碧 and セーリム ナッタウット and 林 晋平 and 佐伯 元司},
title = {情報検索に基づくBug Localizationへの不吉な臭いの深刻度の利用},
journal = {情報処理学会研究報告},
volume = {2018-SE-198},
number = 16,
pages = {1--8},
year = 2018,
month = {mar},
}
[takahashi-a-at-sigse201803]: as a page
In this paper, we propose a tool that supports developers in reconstructing a partial commit history in a version control system. It analyzes dependencies between commits at the textual and build levels in advance and prevents failures when incorporating commits. In this way, we attempt to improve the reconstruction process, which currently demands advanced knowledge of version control systems and trial and error from developers.
@inproceedings{rfunaki-ses2018,
author = {舟木 亮介 and 林 晋平 and 佐伯 元司},
title = {コミット間の依存関係を考慮した部分コミット履歴の再構成支援に向けて},
booktitle = {ソフトウェアエンジニアリングシンポジウム2018予稿集},
pages = {255--256},
year = 2018,
month = {sep},
}
[rfunaki-ses2018]: as a page
@proceedings{hayashi-maint2018,
title = {1st IEEE International Workshop on Mining and Analyzing Interaction Histories (MAINT 2018)},
year = 2018,
month = {mar},
}
[hayashi-maint2018]: as a page
@misc{wlan-dsbc2018,
author = {Lan Wang and Shinpei Hayashi and Motoshi Saeki},
title = {Automated Data Integration based on Conceptual Models and Its Application in Industry},
howpublished = {Data Science {\&} Blockchain Workshop},
year = 2018,
month = {oct},
}
[wlan-dsbc2018]: as a page
Since the fall of 2014, 株式会社スペースタイム has run a combined science and programming classroom for elementary and junior high school students, named ラッコラ. ラッコラ is characterized by covering the same theme in mutually complementary science and programming parts, and, in the programming part, by adopting JavaScript as the programming language and using an in-house development environment. This paper describes the outline of ラッコラ and its history so far.
@misc{takty-axies2018,
author = {柳田 拓人 and 中村 景子 and 林 晋平},
title = {サイエンス&プログラミング教室の試みと実践},
howpublished = {大学ICT推進協議会2018年度全国大会},
year = 2018,
month = {nov},
}
[takty-axies2018]: as a page
In multi-layer systems such as web applications, locating features is a challenging problem because each feature is often realized through a collaboration of program elements belonging to different layers. This paper proposes a semi-automatic technique to extract correspondence between features and program elements among layers, by merging execution traces of every layer to feed into formal concept analysis. By applying this technique to a web application, not only modules in the application layer but also web pages in the presentation layer and table accesses in the data layer can be associated with features at once. To show the feasibility of our technique, we applied it to a web application example, extracted the correspondence between features and the program elements distributed over three layers, and discuss its applicability to the support of realistic program comprehension.
@inproceedings{kazato-ses2018,
author = {風戸 広史 and 林 晋平 and 大島 剛志 and 小林 隆志 and 夏川 勝行 and 星野 隆 and 佐伯 元司},
title = {多層システムに対する横断的な機能捜索},
booktitle = {ソフトウェアエンジニアリングシンポジウム2018予稿集},
pages = 14,
year = 2018,
month = {sep},
}
[kazato-ses2018]: as a page
@misc{takahashi-a-at-ses2018,
author = {高橋 碧 and セーリム ナッタウット and 林 晋平 and 佐伯 元司},
title = {情報検索に基づくBug Localizationに不吉な臭いが与える影響の調査},
howpublished = {ソフトウェアエンジニアリングシンポジウム2018},
year = 2018,
month = {sep},
}
[takahashi-a-at-ses2018]: as a page
@article{ishikawa-jssst201711,
author = {石川 冬樹 and 馬谷 誠二 and 小宮 常康 and 林 晋平 and 横山 大作},
title = {特集「ソフトウェア論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 34,
number = 4,
pages = 2,
year = 2017,
month = {nov},
}
[ishikawa-jssst201711]: as a page
複数のレイヤで構成されたソフトウェアでは,レイヤ間に分散したプログラム要素が協調動作して1つの機能を実現するために,機能とプログラム要素群を対応付ける作業である機能捜索が難しい.そこで,本論文では機能とレイヤ間に分散したプログラム要素群の対応関係を半自動的に抽出する機能捜索手法を提案する.提案手法ではレイヤごとの実行プロファイルを併合して形式概念分析への入力として用いることにより,異なるレイヤに属するプログラム要素を形式概念としてグループ化し,機能の集合と対応付ける.たとえば,提案手法をWebアプリケーションに対して適用することにより,アプリケーション層に属するモジュールだけでなく,プレゼンテーション層やデータ層に属するWebページやデータベースのテーブルアクセスといった要素を同時に機能に紐付けられる.Webアプリケーションの例題に提案手法を適用し,3つのレイヤに分散したプログラム要素と機能の対応関係を抽出した事例を示すことにより,手法の実現可能性を示すとともに,現実的なプログラム理解の支援に向けた応用可能性について議論する.
In multi-layer systems such as web applications, locating features is a challenging problem because each feature is often realized through a collaboration of program elements belonging to different layers. This paper proposes a semi-automatic technique to extract correspondence between features and program elements among layers, by merging execution traces of every layer to feed into formal concept analysis. By applying this technique to a web application, not only modules in the application layer but also web pages in the presentation layer and table accesses in the data layer can be associated with features at once. To show the feasibility of our technique, we applied it to a web application which conforms to the typical three-layer architecture of Java EE and discuss its applicability to other layer systems in the real world.
@article{kazato-ipsjj201704,
author = {風戸 広史 and 林 晋平 and 大島 剛志 and 小林 隆志 and 夏川 勝行 and 星野 隆 and 佐伯 元司},
title = {多層システムに対する横断的な機能捜索},
journal = {情報処理学会論文誌},
volume = 58,
number = 4,
pages = {885--897},
year = 2017,
month = {apr},
}
[kazato-ipsjj201704]: as a page
@article{nakagawa-jssst201708,
author = {中川 博之 and 小林 努 and 林 晋平 and 吉岡 信和 and 鵜林 尚靖},
title = {ER 2016参加報告},
journal = {コンピュータソフトウェア},
volume = 34,
number = 3,
pages = {75--80},
year = 2017,
month = {aug},
}
[nakagawa-jssst201708]: as a page
@article{hayashi-jssst201708,
author = {林 晋平 and 川端 英之},
title = {特集「サーベイ論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 34,
number = 3,
pages = 2,
year = 2017,
month = {aug},
}
[hayashi-jssst201708]: as a page
本論文ではクラスへの責務割当てをファジィ制約充足問題として定式化することにより自動化を図る手法を提案する.責務とは各クラスのインスタンスが果たすべき役割を指し,それらをクラスへ適切に割り当てることによって,高品質な設計が実現される.割当てに際しては,疎結合かつ高凝集な割当てが望ましいなど,様々な観点を考慮することが必要となる.しかし,そのような観点の間にはトレード・オフがあるため,現実的には条件を適度に満たす割当てが求められ,計算機による支援が重要である.そこで,様々な条件をファジィ制約として表現し,責務割当てをファジィ制約充足問題として定式化する.これによって,既存の汎用的なアルゴリズムを適用できるようになり,解としての責務割当ての導出が可能となる.例題に対して解を導出した結果を示す.
The authors formulate the class responsibility assignment (CRA) problem as a fuzzy constraint satisfaction problem (FCSP) to automate CRA, and show the results of automatic assignments on examples. Responsibilities are contracts or obligations of objects that they should assume; by aligning them to classes appropriately, designs of high quality are realized. Typical aspects of a desirable design are a low coupling between highly cohesive classes. However, because of trade-offs among such aspects, solutions that satisfy the conditions moderately are desired, and computer assistance is needed. The authors represent the conditions of such aspects as fuzzy constraints and formulate CRA as an FCSP. That enables us to apply common algorithms that solve FCSPs to the problem and to derive solutions representing a CRA.
@article{hayashi-ipsjj201704,
author = {林 晋平 and 柳田 拓人 and 佐伯 元司 and 三村 秀典},
title = {クラス責務割当てのファジィ制約充足問題としての定式化},
journal = {情報処理学会論文誌},
volume = 58,
number = 4,
pages = {795--806},
year = 2017,
month = {apr},
}
[hayashi-ipsjj201704]: as a page
Refactoring large systems involves several sources of uncertainty related to the severity levels of code smells to be corrected and the importance of the classes in which the smells are located. Both severity and importance of identified refactoring opportunities (e.g. code smells) are difficult to estimate. In fact, due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. In addition, some code fragments can contain severe quality issues but they are not playing an important role in the system. To address this problem, we introduced a multi-objective robust model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between three objectives to maximize: quality improvements, severity and importance of refactoring opportunities to be fixed. We evaluated our approach using 8 open source systems and one industrial project, and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in all the experiments based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in terms of quality to those suggested by existing approaches, better prioritization of refactoring opportunities and to carry an acceptable robustness price.
@article{mkaouer-emse201704,
author = {Mohamed Wiem Mkaouer and Marouane Kessentini and Mel {\'{O}} Cinn{\'{e}}ide and Shinpei Hayashi and Kalyanmoy Deb},
title = {A Robust Multi-Objective Approach to Balance Severity and Importance of Refactoring Opportunities},
journal = {Empirical Software Engineering},
volume = 22,
number = 2,
pages = {894--927},
year = 2017,
month = {apr},
}
[mkaouer-emse201704]: as a page
To develop with lower costs information systems that do not violate regulations, it is necessary to elicit requirements compliant with the regulations. Automated support allows us to avoid missing requirements necessary to comply with regulations and to exclude functional requirements against the regulations. In this paper, we propose a technique to detect goals relevant to regulations in a goal model and to add goals so that the resulting goal model can be compliant with the regulations. In this approach, we obtain the goals relevant to regulations by semantically matching goal descriptions to regulatory statements. We use the Case Grammar approach to deal with the meaning of goal descriptions and regulatory statements, i.e., both are transformed into case frames as their semantic representations, and we check if their case frames can be unified. After detecting the relevant goals, based on the modality of the matched regulatory statements, new goals to realize compliance with the regulatory statements are added to the goal model. We conducted case studies and found that 93% of regulatory violations could be corrected.
@inproceedings{negishi-cbi2017,
author = {Yu Negishi and Shinpei Hayashi and Motoshi Saeki},
title = {Establishing Regulatory Compliance in Goal-Oriented Requirements Analysis},
booktitle = {Proceedings of the 19th IEEE Conference on Business Informatics},
pages = {434--443},
year = 2017,
month = {jul},
}
[negishi-cbi2017]: as a page
Failures of precondition checking when attempting to apply automated refactorings often discourage programmers from attempting to use these refactorings in the future. To alleviate this situation, the postponement of the failed refactoring instead its cancellation is beneficial. This poster paper proposes a new concept of postponable refactoring and a prototype tool that implements postponable Extract Method as an Eclipse plug-in. We believe that this refactoring tool inspires a new field of reconciliation automated and manual refactoring.
@inproceedings{maruyama-icse2017,
author = {Katsuhisa Maruyama and Shinpei Hayashi},
title = {A Tool Supporting Postponable Refactoring},
booktitle = {Proceedings of the 39th International Conference on Software Engineering},
pages = {133--135},
year = 2017,
month = {may},
}
[maruyama-icse2017]: as a page
Because numerous code smells are revealed by code smell detectors, many attempts have been undertaken to mitigate related problems by prioritizing and filtering code smells. We earlier proposed a technique to prioritize code smells by leveraging the context of the developers, i.e., the modules that the developers plan to implement. Our empirical studies revealed that the results of code smells prioritized using our technique are useful to support developers' implementation on the modules they intend to change. Nonetheless, in software change processes, developers often navigate through many modules and refer to them before making actual changes. Such modules are important when considering the developers' context. Therefore, it is essential to ascertain whether our technique can also support developers on modules to which they are going to refer to make changes. We conducted an empirical study of an open source project adopting tools for recording developers' interaction history. Our results demonstrate that the code smells prioritized using our approach can also be used to support developers for modules to which developers are going to refer, irrespective of the need for modification.
@inproceedings{natthawute-mtd2017,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {Revisiting Context-Based Code Smells Prioritization: On Supporting Referred Context},
booktitle = {Proceedings of the XP 2017 Scientific Workshops},
pages = {1--5},
year = 2017,
month = {may},
}
[natthawute-mtd2017]: as a page
In the world of the Internet of Things, heterogeneous systems and devices need to be connected. A key issue for systems and devices is data interoperability such as automatic data exchange and interpretation. A well-known approach to solve the interoperability problem is building a conceptual model (CM). Regarding CM in industrial domains, there are often a large number of entities defined in one CM. How data interoperability with such a large-scale CM can be supported is a critical issue when applying CM into industrial domains. In this paper, evolved from our previous work, a meta-model equipped with new concepts of “PropertyRelationship” and “Category” is proposed, and a tool called FSCM supporting the automatic generation of property relationships and categories is developed. A case study in an industrial domain shows that the proposed approach effectively improves the data interoperability of large-scale CMs.
@inproceedings{wlan-er2017,
author = {Lan Wang and Shinpei Hayashi and Motoshi Saeki},
title = {An Improvement on Data Interoperability with Large-Scale Conceptual Model and Its Application in Industry},
booktitle = {Conceptual Modeling: Research in Progress: Companion Proceedings of the 36th International Conference on Conceptual Modelling},
pages = {249--262},
year = 2017,
month = {nov},
}
[wlan-er2017]: as a page
Goal refinement is a crucial step in goal-oriented requirements analysis to create a goal model of high quality. Poor goal refinement leads to missing requirements and eliciting incorrect requirements as well as less comprehensiveness of produced goal models. This paper proposes a technique to automate detecting bad smells of goal refinement, symptoms of poor goal refinement. Based on the classification of poor refinement, we defined four types of bad smells of goal refinement and developed two types of measures to detect them: measures on the graph structure of a goal model and semantic similarity of goal descriptions. We have implemented a support tool to detect bad smells and assessed its usefulness by an experiment.
@inproceedings{k_asano-mreba2017,
author = {Keisuke Asano and Shinpei Hayashi and Motoshi Saeki},
title = {Detecting Bad Smells of Refinement in Goal-Oriented Requirements Analysis},
booktitle = {Proceedings of the 4th International Workshop on Conceptual Modeling in Requirements and Business Analysis},
pages = {122--132},
year = 2017,
month = {nov},
}
[k_asano-mreba2017]: as a page
Goal-oriented requirements analysis (GORA) has been growing in the area of requirements engineering. It is one of the approaches that elicits and analyzes stakeholders' requirements as goals to be achieved, and develops an AND-OR graph, called a goal graph, as a result of requirements elicitation. However, although it is important to involve stakeholders' ideas and viewpoints during requirements elicitation, GORA still has the problem that its processes lack deeper participation of stakeholders. Regarding stakeholders' participation, creativity techniques have also become popular in requirements engineering. They aim to create novel and appropriate requirements by involving stakeholders. One of these techniques, the KJ-method, organizes and associates novel ideas generated by brainstorming. In this paper, we present an approach to supporting stakeholders' participation during GORA processes by transforming an affinity diagram of the KJ-method into a goal graph, including transformation guidelines, and also apply our approach to an example.
@inproceedings{kinoshita-mreba2017,
author = {Tomoo Kinoshita and Shinpei Hayashi and Motoshi Saeki},
title = {Goal-Oriented Requirements Analysis Meets a Creativity Technique},
booktitle = {Proceedings of the 4th International Workshop on Conceptual Modeling in Requirements and Business Analysis},
pages = {101--110},
year = 2017,
month = {nov},
}
[kinoshita-mreba2017]: as a page
Code smells are considered to be indicators of design flaws or problems in source code. Various tools and techniques have been proposed for detecting code smells. The number of code smells detected by these tools is generally large, so approaches have also been developed for prioritizing and filtering code smells. However, the lack of empirical data regarding how developers select and prioritize code smells hinders improvements to these approaches. In this study, we investigated professional developers to determine the factors they use for selecting and prioritizing code smells. We found that Task relevance and Smell severity were most commonly considered during code smell selection, while Module importance and Task relevance were employed most often for code smell prioritization. These results may facilitate further research into code smell detection, prioritization, and filtration to better focus on the actual needs of developers.
@inproceedings{natthawute-icsme2017,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {How Do Developers Select and Prioritize Code Smells? A Preliminary Study},
booktitle = {Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution},
pages = {484--488},
year = 2017,
month = {sep},
}
[natthawute-icsme2017]: as a page
Dynamic feature location techniques (DFLTs), which use execution profiles of scenarios that trigger a feature, are a promising approach to locating features in the source code. A sufficient set of scenarios is key to obtaining highly accurate results; however, its preparation is laborious and difficult in practice. In most cases, a scenario exercises not only the desired feature but also other features. We focus on the relationship between a module and multiple features that can be calculated with no extra scenarios, to improve the accuracy of locating the desired feature in the source code. In this paper, we propose a DFLT using the odds ratios of the multiple relationships between modules and features. We use the similarity coefficient, which is used in fault localization techniques, as a relationship. Our DFLT better orders shared modules compared with an existing DFLT. To reduce developer costs in our DFLT, we also propose a filtering technique that uses formal concept analysis. We evaluate our DFLT on the features of an open source software project with respect to developer costs and show that our DFLT outperforms the existing approach; the average cost of our DFLT is almost half that of the state-of-the-art DFLT.
@inproceedings{maaki-compsac2017,
author = {Maaki Nakano and Kunihiro Noda and Shinpei Hayashi and Takashi Kobayashi},
title = {Mediating Turf Battles! Prioritizing Shared Modules in Locating Multiple Features},
booktitle = {Proceedings of the 41st IEEE Computer Society Signature Conference on Computers, Software and Applications},
pages = {363--368},
year = 2017,
month = {jul},
}
[maaki-compsac2017]: as a page
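As an illustration of the module-feature relationship used in [maaki-compsac2017], the sketch below computes the Ochiai coefficient, one similarity coefficient commonly used in fault localization, over invented execution profiles. The scenario data and module names are hypothetical, and the paper's odds-ratio ordering and FCA-based filtering are not reproduced here.

```python
import math

# Hypothetical execution profiles: scenario -> modules executed,
# and scenario -> feature the scenario was prepared for.
profiles = {"s1": {"A", "B"}, "s2": {"A", "C"}, "s3": {"B", "C", "D"}}
scenario_feature = {"s1": "login", "s2": "login", "s3": "search"}

def ochiai(module: str, feature: str) -> float:
    """Ochiai coefficient between a module and a feature, as in fault
    localization: |both| / sqrt(|feature runs| * |module runs|)."""
    runs_f = [s for s, f in scenario_feature.items() if f == feature]
    runs_m = [s for s, mods in profiles.items() if module in mods]
    both = len([s for s in runs_f if module in profiles[s]])
    denom = math.sqrt(len(runs_f) * len(runs_m))
    return both / denom if denom else 0.0

# Rank modules for the 'login' feature; modules shared with other
# features (here B and C) get intermediate scores instead of 0 or 1.
modules = sorted({m for mods in profiles.values() for m in mods})
for m in sorted(modules, key=lambda m: -ochiai(m, "login")):
    print(m, round(ochiai(m, "login"), 2))
```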
Utilizing software architecture patterns is important for reducing maintenance costs. However, maintaining code according to the constraints defined by the architecture patterns is time-consuming work. As described herein, we propose a technique to detect code fragments that are non-compliant with the architecture as fine-grained architectural violations. The technique takes as inputs a dependence graph among code fragments extracted from the source code and inference rules according to the architecture. A set of candidate components to which a code fragment can be affiliated is attached to each node of the graph and is updated step by step. The inference rules express the components' responsibilities and dependency constraints; they remove the candidate components of each node that do not satisfy the constraints, based on the current estimated states of the surrounding code fragments. If the final candidate set does not include the component a code fragment currently belongs to, the fragment is detected as a violation. By defining inference rules for the MVC2 architecture and applying the technique to web applications using Play Framework, we obtained accurate detection results.
@inproceedings{hayashi-icsoft2017,
author = {Shinpei Hayashi and Fumiki Minami and Motoshi Saeki},
title = {Inference-Based Detection of Architectural Violations in MVC2},
booktitle = {Proceedings of the 12th International Conference on Software Technologies},
pages = {394--401},
year = 2017,
month = {jul},
}
[hayashi-icsoft2017]: as a page
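The step-by-step candidate pruning in [hayashi-icsoft2017] can be sketched as a fixpoint computation over the dependence graph. In the toy Python below, the components, responsibility seeds, allowed dependencies, and example graph are invented stand-ins for the paper's MVC2 inference rules.

```python
# Illustrative sketch of candidate-set pruning over a dependence graph.
# Components, seeds, allowed dependencies, and the graph are invented;
# they are not the paper's MVC2 rule set.

COMPONENTS = {"controller", "view", "model"}
# (from, to) dependency pairs that the hypothetical architecture permits:
ALLOWED = {("controller", "view"), ("controller", "model"),
           ("view", "controller"), ("controller", "controller"),
           ("view", "view"), ("model", "model")}

def detect_violations(cand, edges, declared):
    """Prune candidates until a fixpoint, then flag fragments whose
    declared component is no longer among their candidates."""
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            keep_u = {c for c in cand[u] if any((c, d) in ALLOWED for d in cand[v])}
            keep_v = {d for d in cand[v] if any((c, d) in ALLOWED for c in cand[u])}
            if (keep_u, keep_v) != (cand[u], cand[v]):
                cand[u], cand[v] = keep_u, keep_v
                changed = True
    return [n for n, comp in declared.items() if comp not in cand[n]]

# Seeds come from hypothetical responsibility rules (e.g., "renders HTML
# -> view"); 'Helper' shows no evidence, so any component is possible.
cand = {"UserPage": {"view"}, "UserStore": {"model"}, "Helper": set(COMPONENTS)}
declared = {"UserPage": "view", "UserStore": "model", "Helper": "view"}
edges = [("UserPage", "Helper"), ("Helper", "UserStore")]
print(detect_violations(cand, edges, declared))  # -> ['Helper']
```

Pruning leaves only "controller" as a candidate for Helper (a view may not depend on a model under the invented rules), so its declared affiliation "view" is reported as a violation.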
Behavior preservation often bothers programmers in refactoring. This poster paper proposes a new approach that tames behavior preservation by introducing the concept of a frame. A frame in refactoring defines a stakeholder's individual concerns about the refactored code. Frame-based refactoring preserves the observable behavior within a particular frame. Therefore, it helps programmers distinguish the behavioral changes that they should observe from those that they can ignore.
@inproceedings{maruyama-saner2017,
author = {Katsuhisa Maruyama and Shinpei Hayashi and Norihiro Yoshida and Eunjong Choi},
title = {Frame-Based Behavior Preservation in Refactoring},
booktitle = {Proceedings of the 24th IEEE International Conference on Software Analysis, Evolution, and Reengineering},
pages = {573--574},
year = 2017,
month = {feb},
}
[maruyama-saner2017]: as a page
[Context & motivation] To develop information systems providing high business value, we should clarify As-is business processes and the information systems supporting them, identify the problems hidden in them, and develop To-be information systems so that the identified problems can be solved. [Question/problem] In this development, we need a technique to support the identification of the problems, which can be seamlessly connected to the modeling techniques. [Principal ideas/results] In this paper, to define metrics for extracting the problems of the As-is system according to its specific domains, we propose combining Goal-Question-Metric (GQM) with existing requirements analysis techniques. Furthermore, we integrate goal-oriented requirements analysis (GORA) with the problem frames approach and use case modeling to define metrics measuring the problematic efforts of human actors in the As-is models. This paper includes a case study of a reporting operation process at a brokerage office to check the feasibility of our approach. [Contribution] Our contribution is the proposal of using GQM to identify the problems of an As-is model specified with the combination of GORA, use case modeling, and problem frames.
@inproceedings{ito-refsq2017,
author = {Shoichiro Ito and Shinpei Hayashi and Motoshi Saeki},
title = {How Can You Improve Your As-is Models? Requirements Analysis Methods Meet GQM},
booktitle = {Proceedings of the 23rd Working Conference on Requirements Engineering: Foundation for Software Quality},
pages = {95--111},
year = 2017,
month = {feb},
}
[ito-refsq2017]: as a page
問題領域固有の知識(ドメイン知識) の不足による要求仕様の誤りや欠落は開発の手戻りや,実現するべきシステムの機能の欠落や,不要な機能の実装の原因となる.本論文では,ゴール指向要求分析法において,過去の事例等から得られたドメイン知識を利用してゴールの詳細化を支援する手法を提案する.この手法では単語概念と格フレームを表す単語の組の概念,そして概念間の関係により定義されたドメインオントロジを利用する.オントロジ上に定義された格フレームとゴール記述のマッチングを行い,マッチした格フレームと関係を持つ概念を注目しているゴールに必要な概念と考え,この概念を持つゴールをサブゴールとして推薦する.既存のゴール指向要求分析ツールに記述解析機構,マッチング機構,推論機構を組み込み,事例評価を行い有用な推薦が行われることを示した.
Errors and omissions in requirements specifications caused by a lack of knowledge specific to the problem domain (domain knowledge) lead to development rework, missing functions of the system to be realized, and the implementation of unnecessary functions. This paper proposes a technique to support goal refinement in goal-oriented requirements analysis by using domain knowledge obtained from past cases. The technique uses a domain ontology defined by word concepts, concepts of word tuples representing case frames, and relationships among these concepts. Case frames defined in the ontology are matched against goal descriptions; the concepts related to a matched case frame are regarded as the concepts necessary for the focused goal, and goals having these concepts are recommended as subgoals. We embedded a description analysis mechanism, a matching mechanism, and an inference mechanism into an existing goal-oriented requirements analysis tool, and a case-based evaluation showed that useful recommendations were produced.
@article{kazuaki-sigse201703,
author = {平澤 一晃 and 林 晋平 and 佐伯 元司},
title = {ゴール指向要求分析における語彙間の格関係によるゴール推薦},
journal = {情報処理学会研究報告},
volume = {2017-SE-195},
number = 11,
pages = {1--8},
year = 2017,
month = {mar},
}
[kazuaki-sigse201703]: as a page
プログラム中の識別子は適切に名付けられるとプログラム理解に貢献するとされている.開発中に識別子名は様々な意図によって変更が加えられるが,大規模プロジェクトでは大きな手間となる.これに対して識別子名の一括変更を行う手法が提案されているが,検出精度に課題がある.本論文では識別子名一括変更を行う手法に対して,推測する変更意図の詳細化や識別子名の安定性を考慮等の拡張を行い,推薦精度の向上を図った.またツールによる自動化を行い,オープンソースソフトウェアで実際に行われた識別子名変更を用いて従来手法との比較を行った.
Appropriately named identifiers are said to contribute to program comprehension. Identifier names are changed with various intentions during development, but such changes require much effort in large projects. Techniques for renaming identifiers in a batch have been proposed to address this; however, their detection accuracy remains an issue. In this paper, we extend a batch renaming technique, e.g., by refining the inferred change intentions and by considering the stability of identifier names, to improve its recommendation accuracy. We also automated the extended technique as a tool and compared it with the existing technique using identifier renamings actually performed in open source software.
@article{umekawa-sigse201703,
author = {梅川 尚孝 and 林 晋平 and 佐伯 元司},
title = {識別子名一括変更支援における推薦精度の向上に向けて},
journal = {情報処理学会研究報告},
volume = {2017-SE-195},
number = 1,
pages = {1--8},
year = 2017,
month = {mar},
}
[umekawa-sigse201703]: as a page
ソフトウェアの開発ではイシュー管理システム(ITS)が広く利用されている.ITS上のイシュー内部のコメントによって構成される議論を把握することで,そのソフトウェアの問題点や今後の実装方針を知ることができるが,複雑な議論の場合その流れを把握することが難しいという課題がある.本論文ではイシュー上の議論についてコメントの記述内容からコメント間の関連性を抽出し,議論の流れをグラフとして可視化するための手法を提案する.提案手法はコメントの記述内容が議論参加者の名前を含むか,以前のコメントの内容を引用しているか,議論参加者自身の以前のコメントの内容を補足するかの3つの観点からコメント間の関係を抽出し,これを用いて議論構造グラフを構築する.議論構造グラフがあることで,議論全体の流れを視覚的に把握することが可能になる.また提案手法に基づき構築したグラフと議論の内容を併せて表示することで読者の議論内容把握を支援するツールを実装した.支援ツールは提案手法により構築したグラフを修正する機能と,理解した内容を記録する機能を有し,読者が議論の理解過程を保存することを可能にする.
Issue tracking systems (ITSs) are widely used in software development. By grasping the discussion formed by the comments inside an issue on an ITS, one can learn the problems of the software and future implementation plans; however, for a complicated discussion it is difficult to follow its flow. This paper proposes a technique for extracting the relationships among comments in an issue from their textual contents and for visualizing the flow of the discussion as a graph. The technique extracts relations between comments from three viewpoints, i.e., whether a comment contains the name of a discussion participant, quotes the content of a previous comment, or supplements the content of the participant's own previous comment, and constructs a discussion structure graph from them. The graph makes it possible to visually grasp the flow of the whole discussion. We also implemented a tool that supports readers in understanding a discussion by displaying the constructed graph together with the discussion contents. The tool provides a function for correcting the constructed graph and a function for recording what has been understood, enabling readers to save their process of understanding the discussion.
@article{houchi-sigss201703,
author = {大内 裕晃 and 林 晋平 and 善明 晃由 and 佐伯 元司},
title = {イシュー上の議論構造の可視化とその理解支援ツール},
journal = {電子情報通信学会技術研究報告},
volume = 116,
number = 512,
pages = {49--54},
year = 2017,
month = {mar},
}
[houchi-sigss201703]: as a page
要求獲得を支援する手法としてゴール指向要求分析法がある.ゴール指向要求分析法ではゴールグラフを利用し,システムにより達成したいゴールを繰り返し詳細化することでシステムが持つべき機能を特定する.ゴールグラフ中に不適切なゴール詳細化を行っている箇所があることは要求の見落としや不要な要求の獲得の原因となる.本論文では,不適切なゴール詳細化を自動的に検出することを目的とする.まず,不適切なゴール詳細化の具体例を収集し,これらを分析して不適切な詳細化の分類を行った.この分析をもとにゴール記述の意味とゴールグラフの構造の2つの観点から検出規則の定義を行った.ゴール記述の意味の観点では,ゴール記述の類似度に着目する.ゴールグラフの構造の観点では,兄弟枝の数および葉ゴールの相対的な深さに着目する.検出規則の有用性を評価するための被験者実験から,検出規則により不適切な詳細化を行っている箇所全部のうち60\%以上を検出でき,被験者の見落としていた不適切な詳細化も指摘できた.
Goal-oriented requirements analysis supports requirements elicitation: using a goal graph, the goals to be achieved by a system are repeatedly refined to identify the functions the system should have. Inappropriate goal refinements in a goal graph cause requirements to be overlooked or unnecessary requirements to be elicited. This paper aims at automatically detecting inappropriate goal refinements. We first collected concrete examples of inappropriate goal refinements and classified them through analysis. Based on this analysis, we defined detection rules from two viewpoints: the semantics of goal descriptions and the structure of the goal graph. From the semantic viewpoint, the rules focus on the similarity of goal descriptions; from the structural viewpoint, they focus on the number of sibling edges and the relative depth of leaf goals. An experiment with human subjects for evaluating the rules showed that the rules detected more than 60% of all inappropriately refined locations and pointed out inappropriate refinements that the subjects had overlooked.
@article{k_asano-sigss201703,
author = {淺野 圭亮 and 林 晋平 and 佐伯 元司},
title = {ゴール指向要求分析法における不適切なゴール詳細化の検出},
journal = {電子情報通信学会技術研究報告},
volume = 116,
number = 512,
pages = {127--132},
year = 2017,
month = {mar},
}
[k_asano-sigss201703]: as a page
プログラムを変更すると複数の依存箇所に変更が伝播するが,大規模なソフトウェア開発では開発者が必要な変更箇所を全て特定することは困難な作業となる.前処理命令を埋め込むことで実行環境の差異の吸収が可能なソフトウェアでは前処理命令が散在するため,必要な変更箇所の特定はより困難となる.本稿では,前処理命令の影響を受ける箇所に対する変更を考慮した共変更ルールの抽出手法を提案する.前処理命令の分岐条件と変更の関係を解析することで,改版履歴中の変更に属性を付与し,ルールを生成する.3つの中規模OSSを用いた評価実験の結果,提案手法によって得られた共変更ルールはリフト値の観点から質が高いことを明らかにした.また,既存手法と比較して推薦精度を表すMRRの最大値が向上することを示した.
A change to a program propagates to multiple dependent locations, and in large-scale software development it is difficult for developers to identify all the locations that need to be changed. In software that absorbs differences among execution environments by embedding preprocessor directives, this identification becomes even harder because the directives are scattered across the code. This paper proposes a technique for extracting co-change rules that takes into account changes to locations affected by preprocessor directives. By analyzing the relation between the branching conditions of preprocessor directives and changes, the technique attaches attributes to the changes in a revision history and generates rules. An evaluation using three medium-sized OSS projects revealed that the co-change rules obtained by the technique are of high quality in terms of lift, and that the maximum MRR, which represents recommendation accuracy, is improved compared with an existing technique.
@article{tmori-sigss201703,
author = {森 達也 and 小林 隆志 and 林 晋平 and 渥美 紀寿},
title = {前処理命令による可変点を考慮した共変更ルール抽出},
journal = {電子情報通信学会技術研究報告},
volume = 116,
number = 512,
pages = {61--66},
year = 2017,
month = {mar},
}
[tmori-sigss201703]: as a page
@misc{maaki-wws2017,
author = {中野 真明貴 and 林 晋平 and 小林 隆志},
title = {動的機能捜索に基づく機能間関係特定に向けて},
howpublished = {ウィンターワークショップ2017・イン・飛騨高山},
year = 2017,
month = {jan},
}
[maaki-wws2017]: as a page
@misc{sarocha-ses2017,
author = {Sarocha Sothornprapakorn and Shinpei Hayashi and Motoshi Saeki},
title = {Visualizing a Tangled Change for Supporting Its Decomposition and Commit Construction},
howpublished = {ソフトウェアエンジニアリングシンポジウム2017},
year = 2017,
month = {aug},
}
[sarocha-ses2017]: as a page
@misc{hoshinot-ses2017,
author = {星野 友宏 and 林 晋平 and 佐伯 元司},
title = {探索に基づくリファクタリング補完の実現に向けて},
howpublished = {ソフトウェアエンジニアリングシンポジウム2017},
year = 2017,
month = {aug},
}
[hoshinot-ses2017]: as a page
Change-aware development environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, since programmers do not always need to replay all the code changes to investigate how a particular entity of the program has been changed, they often eliminate several code changes of no interest by manually skipping them in replaying. This skipping action is an obstacle that makes many programmers hesitate to use existing replaying tools. This paper proposes a slicing mechanism that automatically removes the manually skipped code changes from the whole history of past code changes and extracts only those necessary to build a particular class member of a Java program. In this mechanism, fine-grained code changes are represented by edit operations recorded on the source code of a program, and dependencies among edit operations are formalized. The paper also presents a running tool that slices the operation history and replays the resulting slices. With this tool, programmers can avoid replaying edit operations that are nonessential for the construction of the class members they want to understand. Experimental results show that the tool offered improvements over conventional replaying tools with respect to the reduction of the number of edit operations needed to be examined, and over history filtering tools with respect to the accuracy of edit operations to be replayed.
@article{maruyama-ieicet201603,
author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi},
title = {Slicing Fine-Grained Code Change History},
journal = {IEICE Transactions on Information and Systems},
volume = {E99-D},
number = 3,
pages = {671--687},
year = 2016,
month = {mar},
}
[maruyama-ieicet201603]: as a page
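The slicing idea in [maruyama-ieicet201603] can be pictured as a backward closure over dependencies among recorded edit operations: the slice for a target class member contains its own operations plus everything they transitively depend on, replayed in chronological order. The record format and dependencies below are invented stand-ins for the paper's formalization.

```python
# Toy sketch of operation-history slicing. Records and dependencies are
# invented; the paper formalizes them over actual IDE edit operations.

# op id -> (member edited, ids of earlier ops this op depends on)
history = {
    1: ("Foo.f", []),        # create method f
    2: ("Foo.g", []),        # create method g
    3: ("Foo.f", [1]),       # edit body of f
    4: ("Foo.g", [2, 3]),    # make g use code introduced by op 3
    5: ("Foo.h", []),        # unrelated member
}

def slice_history(history, member):
    """Collect all operations transitively required to build `member`."""
    needed = {op for op, (m, _) in history.items() if m == member}
    work = list(needed)
    while work:
        op = work.pop()
        for dep in history[op][1]:
            if dep not in needed:
                needed.add(dep)
                work.append(dep)
    return sorted(needed)  # chronological replay order

print(slice_history(history, "Foo.g"))  # -> [1, 2, 3, 4]; op 5 is skipped
```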
要求獲得で用いるシソーラスの構築手法とその支援ツールを提案する.本論文で述べる手法は,(1) 要求分析者が技術文書からシソーラスに登録すべき機能の候補を抽出し,(2) ドメインエキスパートが候補を吟味することで,これらの候補からシソーラスに登録する機能と機能に付随する非機能要素を決定するという2段階からなる.ツールが支援するのは (1) である.本手法は,技術文書から (a) 機能を正しく抽出できる,(b) 機能を漏れなく抽出できる,という特徴を満たす必要がある.事例研究において本手法を適用したところ,本手法がこれらの特徴を満たし,有用であることを確認した.
We propose a method of developing a thesaurus for requirements elicitation and its supporting tool. This proposed method consists of two parts - (1) elicitation of candidates of functional requirements to be registered in the thesaurus from technical documents and (2) registration of functional requirements with associated non-functional factors in the thesaurus from these candidates under the direction of domain experts. Our tool supports the first part. This method should satisfy the following two characteristics - (a) extracted functions are correct and (b) any analyst can extract all indispensable functions from technical documents. We show the above two characteristics through case studies and confirm the usability and effectiveness of the proposed method.
@article{jkato-ipsjj201607,
author = {加藤 潤三 and 佐伯 元司 and 大西 淳 and 海谷 治彦 and 林 晋平 and 山本 修一郎},
title = {要求獲得のためシソーラス構築支援},
journal = {情報処理学会論文誌},
volume = 57,
number = 7,
pages = {1576--1589},
year = 2016,
month = {jul},
}
[jkato-ipsjj201607]: as a page
@article{kawabata-jssst201605,
author = {川端 英之 and 林 晋平},
title = {特集「サーベイ論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 33,
number = 2,
pages = 2,
year = 2016,
month = {may},
}
[kawabata-jssst201605]: as a page
@article{ishikawa-jssst201611,
author = {石川 冬樹 and 馬谷 誠二 and 小宮 常康 and 林 晋平 and 細部 博史 and 横山 大作},
title = {特集「ソフトウェア論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 33,
number = 4,
pages = 3,
year = 2016,
month = {nov},
}
[ishikawa-jssst201611]: as a page
@misc{wlan-er2016,
author = {Lan Wang and Shinpei Hayashi},
title = {How to Keep System Consistency via Meta-Model-Based Traceability Rules?},
howpublished = {The 35th International Conference on Conceptual Modeling},
year = 2016,
month = {nov},
}
[wlan-er2016]: as a page
@misc{kinoshita-er2016,
author = {Tomoo Kinoshita and Shinpei Hayashi},
title = {How Do We Use Goal-Oriented Requirements Analysis in Interviews with Stakeholders?: An Approach to Transforming Affinity Diagrams into Goal Graphs},
howpublished = {The 35th International Conference on Conceptual Modeling},
year = 2016,
month = {nov},
}
[kinoshita-er2016]: as a page
Feature location (FL) is an important activity for finding correspondences between software features and modules in source code. Although dynamic FL techniques are effective, the quality of their results depends on analysts preparing sufficient scenarios for exercising the features. In this paper, we propose a technique for guiding the identification of missing scenarios using a prior FL result. After applying FL, unexplored call dependencies are extracted by comparing the results of static and dynamic analyses, and analysts are advised to investigate them to find missing scenarios. We propose several metrics that measure the potential impact of unexplored dependencies to help analysts sort them out. Through a preliminary evaluation using an example web application, we showed that our technique was effective for recommending clues to find missing scenarios.
@inproceedings{hayashi-apsec2016,
author = {Shinpei Hayashi and Hiroshi Kazato and Takashi Kobayashi and Tsuyoshi Oshima and Katsuyuki Natsukawa and Takashi Hoshino and Motoshi Saeki},
title = {Guiding Identification of Missing Scenarios for Dynamic Feature Location},
booktitle = {Proceedings of the 23rd Asia-Pacific Software Engineering Conference},
pages = {393--396},
year = 2016,
month = {dec},
}
[hayashi-apsec2016]: as a page
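The core comparison in [hayashi-apsec2016] can be pictured as a set difference between statically possible call edges and dynamically observed ones. In the sketch below, the impact score used for ordering (static fan-in of the callee) is an invented placeholder, not one of the paper's proposed metrics, and the call edges are hypothetical.

```python
# Sketch of finding unexplored call dependencies: statically possible
# edges that no executed scenario covered.

static_calls = {("ui.submit", "order.place"), ("ui.submit", "order.cancel"),
                ("order.place", "stock.reserve"), ("order.cancel", "stock.release")}
dynamic_calls = {("ui.submit", "order.place"), ("order.place", "stock.reserve")}

unexplored = static_calls - dynamic_calls

def fanin(callee):
    """Number of static callers of `callee` (placeholder impact score)."""
    return sum(1 for _, c in static_calls if c == callee)

# Advise analysts to investigate high-impact unexplored edges first;
# these hint at missing scenarios (here, the cancellation path).
for caller, callee in sorted(unexplored, key=lambda e: -fanin(e[1])):
    print(f"unexplored: {caller} -> {callee} (fan-in {fanin(callee)})")
```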
A socio-technical system (STS) consists of many different actors such as people, organizations, software applications, and infrastructures. We call the actors other than people and organizations machines. Machines should be carefully introduced into an STS because a machine may be beneficial to some people or organizations but harmful to others. We thus propose a goal-oriented requirements modelling language called GDMA, based on i*, so that machines with the following characteristics can be systematically specified. First, machines make the goals of people achieved more often and better than before. Second, machines let people achieve their goals through fewer and easier tasks than before. We also propose analysis techniques for GDMA to judge whether the introduction of machines is appropriate. Several machines are introduced into an as-is model of GDMA locally with the help of model transformation techniques. Then, such an introduction is evaluated globally on the basis of metrics derived from the model structure. We confirmed that GDMA could evaluate the success and failure of existing projects.
@inproceedings{kaiya-somet2016,
author = {Haruhiko Kaiya and Shinpei Ogata and Shinpei Hayashi and Motoshi Saeki},
title = {Early Requirements Analysis for a Socio-Technical System based on Goal Dependencies},
booktitle = {Proceedings of the 15th International Conference on Intelligent Software Methodologies, Tools and Techniques},
pages = {125--138},
year = 2016,
month = {sep},
}
[kaiya-somet2016]: as a page
@misc{k_asano-er2016,
author = {Keisuke Asano and Shinpei Hayashi},
title = {Toward Detecting Inappropriate Goal Refinements in a Goal Model},
howpublished = {The 35th International Conference on Conceptual Modeling},
year = 2016,
month = {nov},
}
[k_asano-er2016]: as a page
To find opportunities for applying prefactoring, several techniques for detecting bad smells in source code have been proposed. Existing smell detectors are often unsuitable for developers who have a specific context because these detectors do not consider the current context and output results in which smells related to that context are mixed with unrelated ones. Consequently, the developers must spend a considerable amount of time identifying relevant smells. As described in this paper, we propose a technique to prioritize bad code smells using developers' context. The explicit data of the context are obtained using a list of issues extracted from an issue tracking system. We applied impact analysis to the list of issues and used the results to specify which smells are associated with the context. Consequently, our approach can provide developers with a list of prioritized bad code smells related to their current context. Several evaluations using open source projects demonstrate the effectiveness of our technique.
@inproceedings{natthawute-icpc2016,
author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
title = {Context-Based Code Smells Prioritization for Prefactoring},
booktitle = {Proceedings of the 24th International Conference on Program Comprehension},
pages = {1--10},
year = 2016,
month = {may},
}
[natthawute-icpc2016]: as a page
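One way to read the approach of [natthawute-icpc2016]: impact analysis of the issue list yields a set of context-relevant modules, and detected smells are reordered by how strongly their modules relate to that context. The scoring below (fraction of a smell's modules that are context-relevant) and all data are simplified inventions, not the paper's ranking function.

```python
# Simplified sketch of context-based smell prioritization: smells whose
# modules overlap the context come first.

# Modules returned by impact analysis of the issues for the next release:
context = {"Cart", "Checkout"}

# (smell kind, modules involved) as reported by a smell detector:
smells = [("God Class", {"Report"}),
          ("Feature Envy", {"Cart", "Pricing"}),
          ("Shotgun Surgery", {"Checkout", "Cart"})]

def relevance(modules):
    """Invented score: share of the smell's modules that are in context."""
    return len(modules & context) / len(modules)

for kind, mods in sorted(smells, key=lambda s: -relevance(s[1])):
    print(f"{relevance(mods):.2f}  {kind}  {sorted(mods)}")
```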
プログラム全体で識別子の種類の表記方法とプログラム中の概念に対する表現を統一するためには,識別子の命名規約や使用する単語を変更する際に複数の識別子名を一括して変更する必要がある.しかし,既存の手法やツールで変更すべき全ての識別子を名前変更することはできず,開発者自身による特定では漏れが生じる.本論文では,ある名前変更操作と一括して行うべき他の識別子に対する名前変更操作を推薦する手法を提案する.実際のプログラムの改版履歴を調査したところ,型名や記号等を用いた識別子の種類の表記方法や,概念に対して使用される単語を統一するための複数の識別子に対する同時名前変更操作が頻繁に行われていた.そこで,提案手法では開発者による1 つの名前変更操作を受け取り,その操作における命名規約や単語の変更を検出する.検出結果に基づき,同一の命名規約で表されるべき同種の識別子及び同じ単語を使用している識別子をプログラム中から探索し,入力の名前変更操作と同様の変更を行って得られた新しい名前を推薦する.識別子名の一括変更操作の実例を再現できるかを確認したところ,平均再現率0.75で実際の名前変更履歴中の同時変更を推薦できた.
To unify how kinds of identifiers are denoted and how concepts are expressed throughout a program, multiple identifier names have to be changed in a batch when a naming convention or a word in use is changed. However, existing techniques and tools cannot rename all the identifiers that should be changed, and developers overlook some of them when identifying the targets by themselves. This paper proposes a technique to recommend renaming operations on other identifiers that should be performed together with a given renaming operation. An investigation of revision histories of actual programs showed that simultaneous renamings of multiple identifiers were frequently performed to unify how kinds of identifiers are denoted, e.g., with type names and symbols, and which words are used for concepts. Our technique therefore takes a single renaming operation performed by a developer and detects the changes of naming conventions and words in the operation. Based on the detection results, it searches the program for identifiers of the same kind that should follow the same convention and for identifiers using the same words, and it recommends new names obtained by applying the same change as the input renaming. When we examined whether the technique can reproduce actual batch renamings, it recommended the simultaneous changes in actual renaming histories with an average recall of 0.75.
@article{hkomata-sigse201603,
author = {小俣 仁美 and 林 晋平 and 佐伯 元司},
title = {命名方法の関連性に基づく識別子名の一括変更支援},
journal = {情報処理学会研究報告},
volume = {2016-SE-191},
number = 23,
pages = {1--8},
year = 2016,
month = {mar},
}
[hkomata-sigse201603]: as a page
法律との整合性を保ったシステムを開発するためには,要求獲得段階での法律を遵守する機能の獲得が必要である.要求分析者の見落としを防ぎ,法律に違反する機能を入れないためには知識と結びついた計算機による支援が有効である.本論文では,ゴール指向要求分析法において法律文に関連するゴールを自動的に特定し,法律を遵守するためのゴールを挿入する手法を提案する.ゴール記述,法律文の意味表現である格フレーム同士をマッチングすることで法律文に関連するゴールを得る.その後,法律文の様相に応じて法律を遵守するためのゴールを挿入する.事例に対して提案手法を適用し,93%の違反修正で効果を示すなどの結果を得た.
To develop a system consistent with laws, functions complying with the laws have to be acquired in the requirements elicitation phase. Knowledge-based computer support is effective for preventing analysts from overlooking such functions and from including functions that violate the laws. This paper proposes a technique, in goal-oriented requirements analysis, to automatically identify the goals related to legal texts and to insert goals for complying with the laws. Goals related to a legal text are obtained by matching case frames, i.e., semantic representations, of goal descriptions against those of legal texts. Goals for complying with the law are then inserted according to the modality of the legal text. We applied the technique to a case study and obtained promising results, e.g., 93% of the violations were correctly fixed.
@article{negishi-sigse201603,
author = {根岸 由 and 林 晋平 and 佐伯 元司},
title = {法律に適合した要求獲得のためのゴールモデル作成支援},
journal = {情報処理学会研究報告},
volume = {2016-SE-191},
number = 22,
pages = {1--8},
year = 2016,
month = {mar},
}
[negishi-sigse201603]: as a page
形式概念分析を利用した動的機能捜索では,機能を実現する実装箇所は概念束内の複数の概念内に分散して現れる.このため捜索時には複数の概念を探索する必要がある.本稿ではこの探索効率の向上を目的として,機能との関連度に着目した概念束の探索戦略を提案する.機能との関連度を表す指標としては,欠陥箇所特定技法で利用される類似係数を用いる.代表的なオープンソースソフトウェアの複数の機能に対し提案手法による機能捜索を行い,既存手法との有用性の比較を行った.
In dynamic feature location based on formal concept analysis, a hard problem is that the modules implementing the focused feature are dispersed among many formal concepts. For this reason, in locating, users must investigate many formal concepts. To improve the efficiency of this feature location technique, we propose two investigation strategies using a relevance metric. We use a similarity coefficient that is used in the area of fault localization as the relevance metric. We applied our investigation strategies to several features of a representative open source software product and compared them with an existing strategy.
@article{maaki-sigss201607,
author = {中野 真明貴 and 林 晋平 and 小林 隆志},
title = {動的機能捜索における関連度と探索戦略},
journal = {電子情報通信学会技術研究報告},
volume = 116,
number = 127,
pages = {169--174},
year = 2016,
month = {jul},
}
[maaki-sigss201607]: as a page
In goal-oriented requirements analysis, goals specify multiple concerns such as functions, strategies, and non-functional properties, and they are refined into subgoals from mixed views of these concerns. This intermixture of concerns in goals makes it difficult for a requirements analyst to understand and maintain goal refinements. Separating concerns and specifying them explicitly is one useful approach to improving the understandability of goal refinements, i.e., the relations between goals and their subgoals. In this paper, we propose a technique to annotate goals with the concerns they have in order to support the understanding of goal refinement. In our approach, goals are refined into subgoals referring to the annotated concerns, and the concerns annotated to a goal and its subgoals provide the meaning of its goal refinement. By tracing and focusing on the annotated concerns, requirements analysts can understand goal refinements and modify unsuitable ones. We have developed a supporting tool and conducted an exploratory experiment to evaluate the usefulness of our approach.
@incollection{hayashi-icsoft2015-ccis,
author = {Shinpei Hayashi and Wataru Inoue and Haruhiko Kaiya and Motoshi Saeki},
title = {Annotating Goals with Concerns in Goal-Oriented Requirements Engineering},
booktitle = {Software Technologies (Revised Selected Papers of ICSOFT 2015)},
pages = {269--286},
year = 2016,
month = {feb},
}
[hayashi-icsoft2015-ccis]: as a page
@misc{k_asano-ses2016,
author = {淺野 圭亮 and 林 晋平 and 佐伯 元司},
title = {ゴールグラフにおける不適切な詳細化の検出に向けて},
howpublished = {ソフトウェアエンジニアリングシンポジウム2016},
year = 2016,
month = {aug},
}
[k_asano-ses2016]: as a page
ソフトウェアアーキテクチャパターンに適合するコードの記述は保守コストの削減のために重要である.しかし,パターンで定義された制約に従うコードの記述は開発者の負担となり,実際にはこれに違反するコードが記述される.本稿では,アーキテクチャ適合のためのリファクタリングプロセスを支援するために,アーキテクチャ制約における違反コードを不吉な臭いとして検出する手法を提案する.提案手法はソースコードから抽出したコード片間の依存関係グラフ,およびアーキテクチャに則した推定規則を入力とし,グラフの各ノードに付加した所属可能なコンポーネントの集合の情報を段階的に更新していく.推定規則はコンポーネントの責務と依存制約を表現しており,周辺コード片の現推定状態から制約を満たさない各ノードのコンポーネント候補を除いていく.最終結果に現所属コンポーネントが含まれていない場合,そのコード片を違反として検出する.MVC2 アーキテクチャを対象として規則を定義し,Play Frameworkを用いたWebアプリケーション群に対して手法を適用したところ,高精度の検出結果を得た.
Writing code that conforms to software architecture patterns is important for reducing maintenance costs. However, following the constraints defined by a pattern burdens developers, and violating code is written in practice. To support a refactoring process toward architectural conformance, this paper proposes a technique that detects code violating architectural constraints as bad smells. The technique takes as inputs a dependence graph among code fragments extracted from the source code and inference rules according to the architecture, and it updates, step by step, the set of candidate components attached to each node of the graph. The inference rules express the responsibilities and dependency constraints of components, and they remove the candidate components of each node that do not satisfy the constraints based on the current estimated states of the surrounding code fragments. If the final result does not include the component a code fragment currently belongs to, the fragment is detected as a violation. We defined rules for the MVC2 architecture and applied the technique to web applications using Play Framework, obtaining highly accurate detection results.
@incollection{minami-ses2016,
author = {陽 文樹 and 林 晋平 and 佐伯 元司},
title = {コンポーネントの責務と依存制約に基づくリファクタリング支援},
booktitle = {ソフトウェアエンジニアリングシンポジウム2016予稿集},
pages = {94--103},
year = 2016,
month = {aug},
}
[minami-ses2016]: as a page
@misc{umekawa-ses2016,
author = {梅川 尚孝 and 林 晋平 and 佐伯 元司},
title = {関連する識別子名の一括変更支援に向けて},
howpublished = {ソフトウェアエンジニアリングシンポジウム2016},
year = 2016,
month = {aug},
}
[umekawa-ses2016]: as a page
リファクタリングとは,ソフトウェアの外部的振る舞いを変化させることなく,内部の構造を改善するプロセスを指す.研究者・実務者ともに,開発プロジェクトにおいて過去に実施されたリファクタリングを知りたいという要求がある.そこで,リファクタリングの実施を自動的に検出する手法(リファクタリング検出手法)が数多く提案されている.これらの手法は,多様な国際会議や論文誌において発表されており,研究者や実務者にとって研究成果を概観することは容易ではない.本稿では,リファクタリング検出手法の中でも,盛んに研究が行われている成果物の変更履歴解析に基づく手法を中心に紹介を行う.まず,本稿におけるリファクタリング検出の定義および分類について述べる.その後,成果物の変更履歴解析に基づく手法を紹介し,今後行われる研究の方向性について考察を行う.
Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. Not only researchers but also practitioners need to know past instances of refactoring performed in a software development project. So far, a number of techniques have been proposed on the automatic detection of refactoring instances. Those techniques have been presented in various international conferences and journals, and it is difficult for researchers and practitioners to grasp the current status of studies on refactoring detection techniques. In this survey paper, we introduce refactoring detection techniques, especially in techniques based on change history analysis. At first, we give the definition and the categorization of refactoring detection in this paper, and then introduce refactoring detection techniques based on change history analysis. Finally, we discuss possible future research directions on refactoring detection.
@article{choi-jssst201502,
author = {崔 恩瀞 and 藤原 賢二 and 吉田 則裕 and 林 晋平},
title = {変更履歴解析に基づくリファクタリング検出技術の調査},
journal = {コンピュータソフトウェア},
volume = 32,
number = 1,
pages = {47--59},
year = 2015,
month = {feb},
}
[choi-jssst201502]: as a page
形式概念分析を用いた動的な機能捜索手法と呼び出し関係グラフ分割手法を組み合わせ,シナリオの用意が十分でない場合でも精度良く機能に対応するモジュール集合を得る方法について,例題への適用結果に基づき検討する.
Based on the results of applying it to an example, we discuss a way of combining a dynamic feature location technique based on formal concept analysis with a call graph partitioning technique so as to accurately obtain the module sets corresponding to features even when the prepared scenarios are insufficient.
@article{kato-ieicet201511,
author = {加藤 哲平 and 林 晋平 and 佐伯 元司},
title = {呼び出し関係グラフ分割手法の動的機能捜索手法との組合せの検討},
journal = {電子情報通信学会論文誌},
volume = {J98-D},
number = 11,
pages = {1374--1376},
year = 2015,
month = {nov},
}
[kato-ieicet201511]: as a page
統合開発環境における開発者の操作履歴を扱う手法について調査を行った.主としてソフトウェア進化研究における引用を想定し,細粒度なソースコード変更の内容を扱うことができる手法を調査対象とした.本論文では,これらの手法を,基盤となる統合開発環境,操作履歴記録手法,操作履歴応用手法の3層のモデルで捉える.これらの手法の特性を整理した後,それぞれの記録手法,応用手法の概要を紹介する.
This paper presents a survey on techniques to record and utilize developers' operations on integrated development environments (IDEs). In particular, we target techniques that treat fine-grained source code changes, so that the survey can be referred to in software evolution research. We created a three-tiered model to represent the relationships among IDEs, recording techniques, and application techniques. This paper also presents common features of the techniques and their details.
@article{omori-jssst201502,
author = {大森 隆行 and 林 晋平 and 丸山 勝久},
title = {統合開発環境における細粒度な操作履歴の収集および応用に関する調査},
journal = {コンピュータソフトウェア},
volume = 32,
number = 1,
pages = {60--80},
year = 2015,
month = {feb},
}
[omori-jssst201502]: as a page
@article{hayashi-jssst201502,
author = {林 晋平 and 川端 英之},
title = {特集「サーベイ論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 32,
number = 1,
pages = 2,
year = 2015,
month = {feb},
}
[hayashi-jssst201502]: as a page
In software configuration management using a version control system, developers have to follow the commit policy of the project. However, preparing changes according to the policy is sometimes cumbersome and time-consuming, in particular when applying a large refactoring consisting of multiple primitive refactoring instances. In this paper, we propose a technique for re-organizing changes by recording the editing operations of source code. Editing operations including refactoring operations are hierarchically managed based on their types, which are provided by an integrated development environment. Using the obtained hierarchy, developers can easily configure the granularity of changes and obtain the resulting changes based on the configured granularity. We confirmed the feasibility of the technique by applying it to the changes recorded in a large refactoring process.
@inproceedings{jmatsu-iwpse2015,
author = {Jumpei Matsuda and Shinpei Hayashi and Motoshi Saeki},
title = {Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results},
booktitle = {Proceedings of the 14th International Workshop on Principles of Software Evolution},
pages = {19--27},
year = 2015,
month = {aug},
}
[jmatsu-iwpse2015]: as a page
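One way to picture the hierarchy in [jmatsu-iwpse2015]: recorded edit operations carry IDE-reported types nested from coarse to fine, and choosing a level of that hierarchy yields one commit per group at the corresponding granularity. The type paths and operations below are invented for illustration.

```python
# Sketch: operations tagged with a hierarchical edit type (coarse -> fine);
# picking a depth groups them into commits of that granularity.

from collections import defaultdict

# (operation id, hierarchical edit type reported by the IDE)
ops = [(1, ("refactoring", "rename", "method")),
       (2, ("refactoring", "rename", "method")),
       (3, ("refactoring", "move", "class")),
       (4, ("manual", "edit", "body"))]

def commits_at(depth):
    """One commit per distinct type prefix of the given depth."""
    groups = defaultdict(list)
    for op_id, path in ops:
        groups[path[:depth]].append(op_id)
    return dict(groups)

print(commits_at(1))  # coarse: {('refactoring',): [1, 2, 3], ('manual',): [4]}
print(commits_at(2))  # finer: rename vs. move vs. manual edit
```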
To check the consistency between requirements specification documents and regulations by using a model checking technique, requirements analysts generate the inputs to the model checker, i.e., state transition machines from the documents and, from the regulatory statements, logical formulas to be verified as properties. During these generation processes, to make the logical formulas semantically correspond to the state transition machine, analysts should perform terminology matching, where they look for the words in the requirements document having the same meaning as the words in the regulatory statements and unify the semantically same words. In this paper, using a case grammar approach, we propose an automated technique to infer the meaning of words in requirements specification documents by means of co-occurrence constraints on words in case frames, and to generate from regulatory statements the logical formulas in which the words are unified with those of the requirements documents. We show the feasibility of our proposal with two case studies.
@inproceedings{nakamura-relaw2015,
author = {Ryotaro Nakamura and Yu Negishi and Shinpei Hayashi and Motoshi Saeki},
title = {Terminology Matching of Requirements Specification Documents and Regulations for Consistency Checking},
booktitle = {Proceedings of the 8th International Workshop on Requirements Engineering and Law},
pages = {10--18},
year = 2015,
month = {aug},
}
[nakamura-relaw2015]: as a page
In this paper, we propose a multi-dimensional extension of goal graphs in goal-oriented requirements engineering in order to support the understanding of the relations between goals, i.e., goal refinements. Goals specify multiple concerns such as functions, strategies, and non-functional properties, and they are refined into subgoals from mixed views of these concerns. This intermixture of concerns in goals makes it difficult for a requirements analyst to understand and maintain goal graphs. In our approach, a goal graph is put in a multi-dimensional space, a concern corresponds to a coordinate axis in this space, and goals are refined into subgoals referring to the coordinates. Thus, the meaning of a goal refinement is explicitly provided by means of the coordinates used for the refinement. By tracing and focusing on the coordinates of goals, requirements analysts can understand goal refinements and modify unsuitable ones. We have developed a supporting tool and conducted an exploratory experiment to evaluate the usefulness of our approach.
@inproceedings{inouew-icsoft2015,
author = {Wataru Inoue and Shinpei Hayashi and Haruhiko Kaiya and Motoshi Saeki},
title = {Multi-Dimensional Goal Refinement in Goal-Oriented Requirements Engineering},
booktitle = {Proceedings of the 10th International Conference on Software Engineering and Applications},
pages = {185--195},
year = 2015,
month = {jul},
}
[inouew-icsoft2015]: as a page
This paper presents Historef, a tool for automating edit history refactoring on the Eclipse IDE for Java programs. The aim of our history refactorings is to improve the understandability and/or usability of the history without changing its whole effect. Historef enables us to apply history refactorings to the edit history recorded in the middle of a developer's source code editing process. By using our integrated tool, developers can commit the refactored edits into the underlying SCM repository after applying edit history refactorings, so that they can easily manage their changes based on the performed edits.
@inproceedings{hayashi-saner2015,
author = {Shinpei Hayashi and Daiki Hoshino and Jumpei Matsuda and Motoshi Saeki and Takayuki Omori and Katsuhisa Maruyama},
title = {Historef: A Tool for Edit History Refactoring},
booktitle = {Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering},
pages = {469--473},
year = 2015,
month = {mar},
}
[hayashi-saner2015]: as a page
Existing techniques have succeeded in helping developers implement new code. However, they are insufficient for helping to change existing code. Previous studies have proposed techniques to support bug fixes, but other kinds of code changes, such as function enhancements and refactorings, are not supported by them. In this paper, we propose a novel system that helps developers change existing code. Unlike existing techniques, our system can support any kind of code change if similar code changes occurred in the past. Our research is still at a very early stage, and we do not have any implementation or prototype yet. This paper introduces our research purpose, an outline of our system, and how our system differs from existing techniques.
@inproceedings{higo-msr2015,
author = {Yoshiki Higo and Akio Ohtani and Shinpei Hayashi and Hideaki Hata and Shinji Kusumoto},
title = {Toward Reusing Code Changes},
booktitle = {Proceedings of the 12th Working Conference on Mining Software Repositories},
pages = {372--376},
year = 2015,
month = {may},
}
[higo-msr2015]: as a page
Threats to existing systems help requirements analysts elicit security requirements for a new system similar to such systems, because security requirements specify how to protect the system against threats, and similar systems require similar means of protection. We propose a method of finding potential threats that can be used for eliciting security requirements for such a system. The method enables analysts to find additional security requirements when they have already elicited one or a few threats. The potential threats are derived from several security targets (STs) in the Common Criteria. An ST contains knowledge related to security requirements, such as threats and objectives, together with their explicit relationships. In addition, individual objectives are explicitly related to a set of means for protection, which are commonly used in any ST. Because we focus on such means to find potential threats, our method can be applied to STs written in any language, such as English or French. We applied our method to three different domains and evaluated it. In our evaluation, we enumerated all threat pairs in each domain. We then predicted, according to the method, whether the two threats in each pair threaten the same requirement. The recall of the prediction was more than 70% and the precision was 20 to 40% in the three domains.
@inproceedings{kaiya-iccgi2015,
author = {Haruhiko Kaiya and Shinpei Ogata and Shinpei Hayashi and Motoshi Saeki and Takao Okubo and Nobukazu Yoshioka and Hironori Washizaki and Atsuo Hazeyama},
title = {Finding Potential Threats in Several Security Targets for Eliciting Security Requirements},
booktitle = {Proceedings of the 10th International Multi-Conference on Computing in the Global Information Technology},
pages = {83--92},
year = 2015,
month = {oct},
}
[kaiya-iccgi2015]: as a page
ゴール指向要求分析法は要求の構造理解や分解に有用であるが,システム実行時の振る舞いに関する要求を把握しづらい.プロブレムフレームはシステム動作や実行時の問題理解に焦点を当てるが,具体的な解決策を構築していくことは難しい.本論文では,これらの要求分析手法を組み合わせ,As-Is からTo-Be へのモデル化手法を提案する.また,このAs-Is モデルを評価するメトリクスを提案し,メトリクスを用いることによってAs-Is モデルの問題点の発見を試みる.
Although goal-oriented requirements engineering is useful for understanding the structure of requirements and refining them, it cannot capture enough requirements on system behavior. Also, although problem-oriented requirements analysis is useful for understanding the requirements of system behavior, it cannot be used for refining them and deriving a To-Be system from an As-Is system. In this paper, we propose an approach for modeling a To-Be system from an As-Is system using these methodologies. We also propose metrics that can be used to evaluate As-Is systems and to detect their problems.
@article{ito-sigss201507,
author = {伊藤 翔一朗 and 林 晋平 and 佐伯 元司},
title = {融合ゴール指向要求分析法におけるメトリクスを用いたAs-Isモデルの問題点発見手法},
journal = {電子情報通信学会技術研究報告},
volume = 115,
number = 153,
pages = {155--160},
year = 2015,
month = {jul},
}
[ito-sigss201507]: as a page
複雑化する分散型版管理システムに対する操作を支援する上で,開発者による版管理システム操作に対する解析が十分に進んでいない.また分散型版管理システムでは集中型版管理システムと異なり,開発者間で共有される履歴の他に,開発者のローカル環境にのみ存在する履歴及び書き換えられて失われる履歴が存在する.これらの履歴の違いや開発者による版管理システムに対する操作を調査するため,我々は分散型版管理システムを利用する開発者のマシン上で観測される履歴及び対版管理システム操作を記録するツールを試作した.
Although handling Distributed Version Control Systems (D-VCSs) is a complicated process for developers, its empirical analysis has not been sufficiently achieved. In particular, in contrast to Centralized Version Control Systems (C-VCSs), which only manage shared repositories, D-VCSs manage change histories on a local repository and rewrite them without sharing them with other developers. We have implemented a prototype tool for logging operations on a D-VCS repository in order to explore such hidden histories and developers' operations on D-VCSs.
@article{jmatsu-sigss201507,
author = {松田 淳平 and 林 晋平 and 佐伯 元司},
title = {分散型版管理リポジトリでの作業履歴記録ツールの試作},
journal = {電子情報通信学会技術研究報告},
volume = 115,
number = 153,
pages = {45--50},
year = 2015,
month = {jul},
}
[jmatsu-sigss201507]: as a page
プレファクタリングの適用箇所を特定するため,ソースコード中の不吉な臭いの検出器が提案されている.しかし,既存の不吉な臭い検出器は,開発者の現在の開発コンテキストを考慮せず,関連する臭いと関連しない臭いを混在させて出力するため,特定のコンテキストに従っている開発者には適さない.その結果,開発者は適する臭いを特定するための時間を必要とするという課題がある.本稿では,不吉な臭いを開発者の持つコンテキストに従い優先順位付けする手法を提案する.我々は,イシュー管理システムに登録された,次のリリースまでに解決すべきイシューの一覧を開発のコンテキストと見なす.提案手法では,イシューの説明文に対して機能捜索手法を適用して得た結果のモジュール一覧を用いて,コンテキストに関連付く臭いを特定する.コンテキストに関連付く度合いによる優先順位に基づき,不吉な臭い検出器の出力を並べ替えて出力する.本稿では,オープンソースプロジェクトを用いて行った提案手法の予備評価についても述べる.
In order to find opportunities for applying prefactoring, several techniques for detecting bad smells in source code have been proposed. However, existing smell detectors are often not suitable for developers who have a specific context because these detectors do not consider the current context and output results in which smells related to that context are mixed with unrelated ones. Consequently, the developers have to spend a considerable amount of time identifying relevant smells. In this paper, we propose a technique to prioritize bad code smells by using developers' context, i.e., a list of issues in an issue tracking system that need to be implemented before the next release. We applied a feature location technique to the list of issues and used the results to specify which smells are associated with the context. Thus, our approach can provide the developers with a list of prioritized bad code smells related to their current context. Several preliminary evaluations using an open source project indicated the effectiveness of our technique.
@article{natthawute-sigss201507,
author = {セーリム ナッタウット and 林 晋平 and 佐伯 元司},
title = {プレファクタリングのための不吉な臭いの検出結果の優先順位付け},
journal = {電子情報通信学会技術研究報告},
volume = 115,
number = 153,
pages = {33--38},
year = 2015,
month = {jul},
}
[natthawute-sigss201507]: as a page
自然言語で書かれた要求文と規則の整合性をモデル検査で検査するためには,要求文を状態遷移モデルに,規則を検査式に変換する必要がある.この際,状態遷移モデルと検査式を対応づけるには,要求文と規則の語彙マッチングを行わなければならない.本稿では,用意した格フレーム辞書と要求文とのマッチング法を開発し,これによる規則への整合性検査手法を提案する.提案手法は,類義語,同義語,上位下位語を処理し,要求文の単語と格スロットに入るべき単語の意味的制約に基づくマッチングを行う.これにより要求文を格フレーム化した後,要求文の意味を踏まえた状態遷移モデル及び該当モデルに対応する規則の検査式を生成する.要求文と規則文に提案手法を適用し有用性を評価した.
When developers check the consistency between requirements specification documents and regulations by model checking, they need state transition models of the documents and logic specifications of the regulations. Moreover, they have to know which words in the documents correspond, in meaning, to which words in the applicable regulations so that they can create the logic specifications. In this paper, we propose a technique to infer the meaning of words in requirements specification documents by using co-occurrence restrictions in case frames, and to create state transition models based on the inferred meanings, together with logic specifications of the applicable regulations. These specifications are created from the inferred meanings and from logic expressions that contain case frames as factors and express the regulations. Our proposal is evaluated with a case study.
@article{nakamura-sigse201503,
author = {中村 遼太郎 and 林 晋平 and 佐伯 元司},
title = {自然言語で書かれた要求文の規則への整合性検査手法},
journal = {情報処理学会研究報告},
volume = {2015-SE-187},
number = 18,
pages = {1--8},
year = 2015,
month = {mar},
}
[nakamura-sigse201503]: as a page
ソースコード編集履歴の理解性や利用性を向上させるための履歴リファクタリング手法が提案されている.しかし,既存手法ではどのような編集履歴をどのようにリファクタリングすべきか明確でない.本稿ではリファクタリングが必要となる履歴の特徴を「履歴の臭い」として定義し,また,履歴の臭いを判別するためのメトリクスを提案する.提案したメトリクスによって各編集操作の結びつきを捉え,臭いの自動検出を可能とする.検出の精度について評価を行い,適合度0.86 など有用な結果を得た.
History refactorings, which improve the understandability and usability of a history of source code edits, have been proposed. However, the proposed techniques have not defined where and how to refactor a history. We define bad smells in a history and metrics for detecting them. Identifying the relationships between editing operations in a history by using the proposed metrics leads to automated detection of bad smells in the history. We confirmed that our detection technique is useful due to its high accuracy.
@article{dhoshino-sigse201503,
author = {星野 大樹 and 林 晋平 and 佐伯 元司},
title = {ソースコード編集履歴の不吉な臭いの検出},
journal = {情報処理学会研究報告},
volume = {2015-SE-187},
number = 9,
pages = {1--8},
year = 2015,
month = {mar},
}
[dhoshino-sigse201503]: as a page
システム設計段階において,あらかじめシステムに存在するセキュリティ脅威を検出し対策を講じることによって,より安全で高信頼度のシステムを開発することができる.しかし,脅威の検出と対策にはセキュリティに関する知識が必要であったり,見落としを極力減らす必要があったりすることから,コストや時間がかかる.そこで本稿では,入力されたシナリオシーケンスに対し,あらかじめ知識として保持している検出と対策のためのパターンとの比較によって,対象システムに発生しうる脅威を検出し,ミスシナリオと脅威が対策されたシナリオ,それを実現するためのセキュリティ機能を提示する手法を提案する.シナリオ記述及びパターンの記述において,セキュリティドメインに基づくプロファイルに従ったUML シーケンス図を利用し,パターン作成のための知識として,コモンクライテリアで定められているセキュリティターゲットを用いる.本稿では適用事例を用いて手法の有用性を確認した.
Detecting and mitigating security threats of information systems in the design phase helps to make them secure. However, the more threats we try to detect and mitigate, the more cost and knowledge of security threats are required. In this paper, we present a technique to detect security threats and to show negative scenarios, mitigated scenarios, and their security functions by comparing the normal scenarios of a business process with patterns created from security knowledge. The scenarios of a business process are described with sequence diagrams. The knowledge is extracted from documents called Security Targets, which are compliant with the international standard Common Criteria. We show the usefulness of our approach with several case studies.
@article{abe-sigse201503,
author = {阿部 達也 and 林 晋平 and 佐伯 元司},
title = {セキュリティターゲットを活用したセキュリティ機能要求獲得支援法},
journal = {情報処理学会研究報告},
volume = {2015-SE-187},
number = 17,
pages = {1--8},
year = 2015,
month = {mar},
}
[abe-sigse201503]: as a page
本稿では,2015年3月にモントリオールで開催されたInternational Conference on Software Analysis, Evolution, and Reengineering(SANER 2015)の内容について報告する.
This paper reports on the 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER 2015), held in Montreal in March 2015.
@article{matsumoto-sigse201506,
author = {松本 卓大 and 三浦 圭裕 and 崔 恩瀞 and 伊原 彰紀 and 林 晋平},
title = {第22回SANERの参加報告},
journal = {情報処理学会研究報告},
volume = {2015-SE-188},
number = 10,
pages = {1--2},
year = 2015,
month = {jun},
}
[matsumoto-sigse201506]: as a page
In order to develop secure information systems with less development cost, it is important to elicit the requirements for security functions (simply, security requirements) as early in the development process as possible. To achieve this, accumulated knowledge of threats and their objectives obtained from practical experience is useful, and techniques to support the elicitation of security requirements utilizing this knowledge should be developed. In this paper, we present a technique for security requirements elicitation that uses practical knowledge of threats, their objectives, and the security functions realizing the objectives, extracted from Security Target documents compliant with the standard Common Criteria. We show the usefulness of our approach with several case studies.
@inproceedings{abe-mreba2015,
author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki},
title = {Modeling and Utilizing Security Knowledge for Eliciting Security Requirements},
booktitle = {Proceedings of the 2nd International Workshop on Conceptual Modeling in Requirements and Business Analysis},
pages = {236--247},
year = 2015,
month = {oct},
}
[abe-mreba2015]: as a page
@misc{maruyama-fose2015,
author = {丸山 勝久 and 林 晋平 and 吉田 則裕 and 崔 恩瀞},
title = {フレームベースリファクタリング ~その概念と意義~},
howpublished = {第22回ソフトウェア工学の基礎ワークショップ},
year = 2015,
month = {nov},
}
[maruyama-fose2015]: as a page
ソフトウェア構成管理において,開発者はしばしば,ソフトウェア開発履歴に記録される変更の理解性や利用性の向上を目的として,履歴の改変を行う.本稿では,履歴をより適した形に改変するための支援手法や自動化手法の開発を目指し,履歴改変の実例を収集・分析する試みについて述べる.
In software configuration management, developers often modify a history to improve the understandability and usability of the changes recorded in the software development history. Toward developing techniques for supporting and automating such modifications of a history into a more suitable form, this paper describes an attempt to collect and analyze actual examples of history modification.
@inproceedings{hayashi-ncjssst2015,
author = {林 晋平 and 佐伯 元司},
title = {ソフトウェア開発履歴の改変例の分析に向けて},
booktitle = {日本ソフトウェア科学会第32回大会 予稿集},
year = 2015,
month = {sep},
}
[hayashi-ncjssst2015]: as a page
ソフトウェア構成管理では,ソースコード変更を開発者にとって意味のある単位ごとに分割してコミットを行うこと(分割コミット)が重要となる.しかし開発者は分割コミットを行わずに様々な意図が混ざったコミットをすることがある.これに対し,ソースコードの編集操作を収集し,それらを分類することで分割コミットを支援する既存手法がある.本稿では,ソースコードにおけるファイルやクラス,メソッド,コメントおよび編集時間を分類の基準とし,これらに基づいて編集操作を自動的に分類する手法を提案する.提案手法を実現するツールを開発し,例題へ適用することにより,既存手法よりも分類の労力を低減することを確認した.
In software configuration management, it is important to separate source code changes into meaningful units before committing them (in short, Task Level Commit). However, developers often commit unrelated code changes in a single transaction. To support Task Level Commit, an existing technique uses an editing history of source code and enables developers to group the editing operations in the history. This paper proposes an automated technique for grouping editing operations in a history based on several criteria, including source files, classes, methods, comments, and edit times. We show how our technique reduces developers' separation cost compared with the manual approach.
@article{dhoshino-jssst201408,
author = {星野 大樹 and 林 晋平 and 佐伯 元司},
title = {ソースコード編集操作の自動グループ化},
journal = {コンピュータソフトウェア},
volume = 31,
number = 3,
pages = {277--283},
year = 2014,
month = {aug},
}
[dhoshino-jssst201408]: as a page
本研究は,有望な要求分析手法の1つであるゴール指向要求分析法について,分析の成果物であるゴールグラフの品質を向上させることを目的としている.本論文では,作成したゴールグラフから低品質なゴールを発見し,改善させることが現実的であると考え,低品質なゴールを同定できるようにするためにゴールグラフの各ゴールに対する品質特性を定義する.品質特性は,要求仕様書が備えるべき品質特性の国際標準であるIEEE Std 830 を参考にした.また,各ゴールがそれぞれの品質特性を満たすか否かを判定する品質特性述語を,属性つきゴールグラフの属性を用いて定義する.さらに我々が定義した品質特性述語を用いて,品質特性を満たさないゴールをゴールグラフの品質を下げる可能性があると見なして分析者に提示する支援ツールを実装した.この支援ツールを用いて,ゴールグラフを書き換えさせる実験を行った.この実験により,我々が定義したゴールに対する品質特性と品質特性述語が品質的に問題のあるゴールの発見と修正を行うために役立つことを示す.
Goal-oriented requirements analysis (GORA) is a promising technique in requirements engineering, especially in requirements elicitation. This paper aims at developing a technique to support the improvement of goal graphs, which are the resulting artifacts of GORA. We consider that improving existing goals of lower quality is more realistic than creating a goal graph of high quality from scratch. To achieve the proposed technique, we formally define quality properties for each goal. Our quality properties are derived from IEEE Std 830 and past related studies. To define them formally, using the attribute values of an attributed goal graph, we formulate predicates for deciding whether a goal satisfies a quality property. We have implemented a supporting tool that shows a requirements analyst the goals that do not satisfy the predicates. Our experiments using the tool show that requirements analysts can efficiently find and modify the qualitatively problematic goals.
@article{ugai-ipsjj201402,
author = {鵜飼 孝典 and 林 晋平 and 佐伯 元司},
title = {属性つきゴールグラフにおけるゴールの品質特性},
journal = {情報処理学会論文誌},
volume = 55,
number = 2,
pages = {893--908},
year = 2014,
month = {feb},
}
[ugai-ipsjj201402]: as a page
@article{kawabata-jssst201402,
author = {川端 英之 and 林 晋平 and 滝本 宗宏},
title = {特集「サーベイ論文」の編集にあたって},
journal = {コンピュータソフトウェア},
volume = 31,
number = 1,
pages = 2,
year = 2014,
month = {feb},
}
[kawabata-jssst201402]: as a page
Software visualization has become a major technique in program comprehension. Although many tools visualize the structure, behavior, and evolution of a program, they have no concern with how a tool user has understood it. Moreover, they miss the artifacts the user has left behind through the trial-and-error processes of his/her program comprehension task. This paper presents a source code visualization tool called CodeForest. It uses a forest metaphor to depict the source code of Java programs. Each tree represents a class within the program, and the collection of trees constitutes a three-dimensional forest. CodeForest helps a user try a large number of combinations of mappings of software metrics onto visual parameters. Moreover, it provides two new types of support: leaving notes that memorize the current understanding and insights along with the visualized objects, and automatically recording the user's actions during understanding. The left notes and recorded actions might be used as historical data providing hints that accelerate the current comprehension task.
@inproceedings{maruyama-icpc2014,
author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi},
title = {A Visualization Tool Recording Historical Data of Program Comprehension Tasks},
booktitle = {Proceedings of the 22nd International Conference on Program Comprehension},
pages = {207--211},
year = 2014,
month = {jun},
}
[maruyama-icpc2014]: as a page
We formulate the class responsibility assignment (CRA) problem as a fuzzy constraint satisfaction problem (FCSP) for automating CRA of high quality. Responsibilities are contracts or obligations that objects should assume; by assigning them to classes appropriately, quality designs are realized. Typical conditions of a desirable design are a low coupling between highly cohesive classes. However, because of the trade-off among such conditions, solutions that satisfy the conditions moderately are desired, and computer assistance is needed. Additionally, if we have an initial assignment, the one improved by our technique should keep the original assignment as much as possible, because the original reflects the intention of human designers. We represent such conditions as fuzzy constraints and formulate CRA as an FCSP. That enables us to apply common FCSP solvers to the problem and to derive a solution representing a CRA. A preliminary evaluation indicates the effectiveness of our technique.
@inproceedings{hayashi-iwesep2014,
author = {Shinpei Hayashi and Takuto Yanagida and Motoshi Saeki and Hidenori Mimura},
title = {Class Responsibility Assignment as Fuzzy Constraint Satisfaction},
booktitle = {Proceedings of the 6th International Workshop on Empirical Software Engineering in Practice},
pages = {19--24},
year = 2014,
month = {nov},
}
[hayashi-iwesep2014]: as a page
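As a toy rendering of the FCSP formulation in [hayashi-iwesep2014]: each fuzzy constraint maps an assignment of responsibilities to classes to a satisfaction degree in [0, 1], and the solver searches for the assignment maximizing the minimum degree. The three constraints below (low coupling, agreement with the initial assignment, and a name-affinity stand-in for cohesion) and all data are simplified inventions, not the paper's constraint definitions.

```python
# Toy rendering of CRA as fuzzy constraint satisfaction: enumerate
# assignments and maximize the minimum satisfaction degree.

from itertools import product

RESP = ["validate", "persist"]          # responsibilities
CLASSES = ["Order", "OrderStore"]       # candidate classes
DEPENDS = {("validate", "persist")}     # responsibility-level dependency
initial = {"validate": "Order", "persist": "Order"}  # designers' draft

def low_coupling(assign):
    """Degree drops with each dependency crossing class boundaries."""
    cross = sum(1 for a, b in DEPENDS if assign[a] != assign[b])
    return 1.0 / (1 + cross)

def keep_initial(assign):
    """Degree of agreement with the initial assignment."""
    return sum(assign[r] == initial[r] for r in RESP) / len(RESP)

def fits_name(assign):
    """Invented affinity: 'persist' prefers a class named like a store."""
    return 1.0 if "Store" in assign["persist"] else 0.3

def degree(assign):
    return min(low_coupling(assign), keep_initial(assign), fits_name(assign))

assignments = (dict(zip(RESP, cs)) for cs in product(CLASSES, repeat=len(RESP)))
best = max(assignments, key=degree)
print(best, degree(best))  # a moderately satisfying trade-off wins
```

Here no assignment satisfies every constraint fully, so the maximizer picks one that satisfies them all moderately, mirroring the trade-off argument in the abstract.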
A basic clue for feature location available to developers is a description of the feature written in a natural language. However, a description of a feature does not clearly specify the boundary of the feature, while developers tend to locate the feature precisely by excluding marginal modules that are likely outside of the boundary. This paper addresses the question: does a clearer description of a feature enable developers to recognize the same sets of modules as relevant to the feature? Based on an experiment with human subjects, we conclude that different descriptions lead to different sets of modules.
@inproceedings{hayashi-aoasia2014,
author = {Shinpei Hayashi and Takashi Ishio and Hiroshi Kazato and Tsuyoshi Oshima},
title = {Toward Understanding How Developers Recognize Features in Source Code from Descriptions},
booktitle = {Proceedings of the 9th International Workshop on Advanced Modularization Techniques},
pages = {1--3},
year = 2014,
month = {nov},
}
[hayashi-aoasia2014]: as a page
@misc{hayashi-jssm201403,
author = {林 晋平},
title = {コモンクライテリアを用いたセキュリティ要求の獲得},
howpublished = {日本セキュリティ・マネジメント学会 先端技術・情報犯罪とセキュリティ研究会(招待講演)},
year = 2014,
month = {mar},
}
[hayashi-jssm201403]: as a page
機能捜索(feature location)はソフトウェアの機能とその実装箇所を対応付ける作業であり,ソフトウェアの理解や変更のための基礎となる.既存の機能捜索手法は分析者が機能を適切に認識していることを前提としており,認識があいまいな状況では十分な効果が得られない.本稿では,形式概念分析に基づく既存の動的な機能捜索手法を拡張し,機能の捜索と識別を反復的に改善する手法を提案する.例題のWeb アプリケーションに適用し,提案手法が正常シナリオのみを用いた機能捜索結果からの代替シナリオの発見に役立つことを示した.
Feature location (FL) is an important activity for finding correspondence between features and modules in source code. However, recognizing features correctly is an inevitable prerequisite for existing FL techniques; otherwise the FL would end up with insufficient or incorrect modules for the features. In this paper, we propose an incremental technique for locating and identifying features by extending an existing dynamic FL technique based on formal concept analysis. We have applied the technique to an example web application and showed that the suggestions from our technique were useful for identifying alternative scenarios when the dynamic FL technique only covers its successful scenarios.
@article{kazato-sigss201401,
author = {風戸 広史 and 林 晋平 and 小林 隆志 and 大島 剛志 and 宮田 俊介 and 夏川 勝行 and 星野 隆 and 佐伯 元司},
title = {反復的なソフトウェア機能捜索・識別の例題への適用},
journal = {電子情報通信学会技術研究報告},
volume = 113,
number = 422,
pages = {119--124},
year = 2014,
month = {jan},
}
[kazato-sigss201401]: as a page
リファクタリングを適用すべき箇所を特定するために,ソースコード中の不吉な臭いを検出する手法がこれまでに提案されている.しかし,特定の機能を実装しようとしている開発者にとっては,現在のソースコード全体にわたって臭いを検出する既存手法の検出結果は適さない.本稿では,注目する機能の実装に関連する臭いを検出することにより,実装を容易にするために実装前にプログラムの構造を改善するプレファクタリングを支援する手法を提案する.提案手法では機能実装により起こる設計の劣化の度合いを機能実装前に推測するために,機能捜索手法によって得られたモジュール群に対して,機能実装によって引き起こる設計劣化を模倣するダミーコードを挿入する.ダミーコード挿入前後でのソースコードを臭い検出器に適用し,得られた臭いの検出結果を比較することで,対象としている機能の実装に強く関連する臭いを特定する.いくつかの予備評価により,提案手法が有効に機能する場合があることを確認した.
In order to find opportunities for applying refactoring, several techniques for detecting bad smells in source code have been proposed. However, existing smell detectors are not suitable for developers who are trying to implement a specific feature because the detectors detect too many smells from the whole source code. In this paper, we propose a technique to detect the bad smells specific to the focused feature, supporting prefactoring that improves the structure of the program before implementing the feature. In order to estimate the effect of introducing the feature before implementing it, dummy code imitating the deterioration of the design quality is inserted into the modules obtained using the result of a feature location technique. Comparing the smells detected in the source code before and after inserting the dummy code, we can specify which smells are strongly related to the target feature. Several preliminary evaluations indicated the effectiveness of our technique.
@article{komatsuda-sigss201407,
author = {小松田 卓也 and 林 晋平 and 佐伯 元司},
title = {機能捜索結果を用いたプレファクタリング支援},
journal = {電子情報通信学会技術研究報告},
volume = 114,
number = 127,
pages = {109--114},
year = 2014,
month = {jul},
}
[komatsuda-sigss201407]: as a page
ソースコードの修正時には,各手順に対応するモジュールを特定する機能捜索が必要となる.しかし,既存の機能捜索手法は機能を構成する概念間の関係を考慮しておらず,高精度で各概念に対応するモジュールを特定することが難しい.本研究では,機能を構成する下位概念間の依存関係を用いて機能捜索を行う手法を提案する.各下位概念に対応する記述を入力として既存の概念捜索手法を適用し,対応するモジュールの一覧を得る.その後,下位概念間に存在する依存関係を満たさないモジュールを一覧から除外することで,得られた一覧の精度を向上させる.既存手法との比較実験により,提案手法の有用性の評価を行った.
Since existing feature location techniques lack the relationships between the concepts in a feature, it is hard to locate the concepts in source code with high accuracy. This paper proposes a technique to locate the concepts in a feature using the dependencies between the concepts. We apply an existing concept location technique to the description of each concept and obtain a list of modules. Modules that do not match the dependencies between concepts are filtered out, so we can obtain a more precise list of modules. A comparison experiment with an existing technique shows the effectiveness of our technique.
@article{kato-sigse201403,
author = {加藤 哲平 and 林 晋平 and 佐伯 元司},
title = {下位概念間の依存関係を用いた機能捜索},
journal = {情報処理学会研究報告},
volume = {2014-SE-183},
number = 17,
pages = {1--8},
year = 2014,
month = {mar},
}
[kato-sigse201403]: as a page
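A minimal sketch of the dependency-based filtering in [kato-sigse201403]: each concept yields a candidate module list, and a candidate survives only if the code-level call graph contains an edge realizing the concept-level dependency. All candidate lists, module names, and call edges below are invented for illustration.

```python
# Minimal sketch of filtering concept-location results with a dependency.

# Concept-level dependency: the 'register_user' step must invoke the
# 'send_mail' step.
concept_dep = ("register_user", "send_mail")

# Ranked candidate modules per concept (from an existing location technique):
candidates = {"register_user": ["UserController.register", "Util.log"],
              "send_mail": ["Mailer.send", "Util.format"]}

# Code-level call edges extracted from the program:
calls = {("UserController.register", "Mailer.send")}

def filter_by_dependency(candidates, dep, calls):
    """Keep only candidates that can realize the concept dependency."""
    src, dst = dep
    kept_src = [m for m in candidates[src]
                if any((m, n) in calls for n in candidates[dst])]
    kept_dst = [n for n in candidates[dst]
                if any((m, n) in calls for m in candidates[src])]
    return {src: kept_src, dst: kept_dst}

print(filter_by_dependency(candidates, concept_dep, calls))
# Util.log and Util.format are filtered out; the matching pair remains.
```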
ユースケース記述は自然言語で記述されるため,内容をコンピュータで分析することは難しい.そこで,ユースケース記述に書かれた仕様を状態遷移モデルに変換しモデルチェッカで検査するために,ユースケース記述の各文を格フレームに変換する手法を提案する.提案手法では,まず,ユースケース記述の自然言語記述の係り受け解析を行う.次に,係り受け解析結果と辞書で定義される複数の格フレーム候補を照合し,入力文の格構造に対応する格フレームと考えられる順に格フレーム候補を順序付けて出力する.事例研究として,提案手法と入力文の格フレームから状態遷移モデルへの変換手法を用いて状態遷移モデルを生成し,モデル検査器に適用した.その結果,ユースケース記述が満たす性質と満たさない性質を判別することができた.
Since use case descriptions are written in a natural language and are thus informal, it is difficult to analyze them automatically. To check whether descriptions satisfy their requirements with a model checker, this paper proposes a method to generate a case frame from each sentence in the descriptions. First, we extract the verb from a sentence and find the case frames associated with the verb in a dictionary. Then, we select the most suitable one based on the analysis results of the sentence. We have implemented a supporting tool for the method. A case study applying the tool and translating the obtained case frames into a state transition model shows the feasibility of the method.
@article{nakamura-sigss201403,
author = {中村 遼太郎 and 林 晋平 and 佐伯 元司},
title = {ユースケース記述の検査のための自然言語要求文の解析},
journal = {電子情報通信学会技術研究報告},
volume = 113,
number = 489,
pages = {25--30},
year = 2014,
month = {mar},
}
[nakamura-sigss201403]: as a page
ソフトウェア進化を実践する上での指針や慣例がソフトウェア進化パターンである.我々は,Demeyerらのオブジェクト指向リエンジニアリングパターンを補完することを目的に,産学連携でパターンの収集を試みた.本稿では,その結果として,ソフトウェアプロダクトライン,コードクローン,ソフトウェア変更支援,プログラム理解支援,リファクタリングプロセスに関する進化パターンを提案する.
Software evolution patterns are guidelines and conventions for practicing software evolution. Aiming to complement Demeyer et al.'s object-oriented reengineering patterns, we attempted to collect such patterns through industry-academia collaboration. As a result, this paper proposes evolution patterns concerning software product lines, code clones, software change support, program comprehension support, and refactoring processes.
@article{maruyama-sigse201405,
author = {丸山 勝久 and 沢田 篤史 and 小林 隆志 and 大森 隆行 and 林 晋平 and 飯田 元 and 吉田 則裕 and 角田 雅照 and 岩政 幹人 and 今井 健男 and 遠藤 侑介 and 村田 由香里 and 位野木 万里 and 白石 崇 and 長岡 武志 and 林 千博 and 吉村 健太郎 and 大島 敬志 and 三部 良太 and 福地 豊},
title = {産学連携によるソフトウェア進化パターン収集の試み},
journal = {情報処理学会研究報告},
volume = {2014-SE-184},
number = 1,
pages = {1--8},
year = 2014,
month = {may},
}
[maruyama-sigse201405]: as a page
本稿では,ソースコード編集履歴における不吉な臭いとその検出法について議論する.ソースコード編集履歴の理解性や利用性を向上させるために編集履歴をリファクタリングする手法があるが,どのような履歴にどのようなリファクタリングが必要なのかは明確でない.現在我々が行っているソフトウェア開発プロジェクトにおいて蓄積された編集履歴の分析を行い,リファクタリングが必要となる履歴の特徴を「履歴の臭い」とし,臭いを検出する試みについて述べる.
This paper discusses bad smells in source code edit histories and how to detect them. Although techniques exist for refactoring an edit history to improve its understandability and usability, it is unclear which kinds of histories need which kinds of refactorings. We analyze the edit histories accumulated in our ongoing software development project, characterize the features of histories requiring refactoring as "history smells," and describe our attempt to detect such smells.
@incollection{dhoshino-ses2014,
author = {星野 大樹 and 林 晋平 and 佐伯 元司},
title = {ソースコード編集履歴の不吉な臭いの検出に向けて},
booktitle = {ソフトウェアエンジニアリングシンポジウム2014予稿集},
pages = {210--211},
year = 2014,
month = {sep},
}
[dhoshino-ses2014]: as a page
ユースケースが示唆する機能要求が,法律をはじめとする規則を遵守するかについて,モデル検査によって検査する手法を述べる.ユースケースのドメインと規則の知識表現をもとに,ユースケースから,ユースケースに関わる規則の検査式と状態遷移モデルを生成する.生成された状態遷移モデルと検査式に,モデル検査を適用した結果から,機能要求が規則を遵守しているかを判定する.
We describe a technique for checking, via model checking, whether the functional requirements implied by a use case comply with regulations such as laws. Based on knowledge representations of the use case's domain and the regulations, we generate from the use case a state transition model and verification formulae for the regulations related to the use case. Applying model checking to the generated model and formulae determines whether the functional requirements comply with the regulations.
@incollection{nakamura-ses2014,
author = {中村 遼太郎 and 林 晋平 and 佐伯 元司},
title = {ユースケース記述の規則への整合性検査に向けて},
booktitle = {ソフトウェアエンジニアリングシンポジウム2014予稿集},
pages = {192--193},
year = 2014,
month = {sep},
}
[nakamura-ses2014]: as a page
ゴール指向とプロブレムフレームを用いた,As-IsシステムからTo-Beシステムへのモデル化手法を提案する.本手法ではまずユースケースやゴールグラフ,プロブレムフレームのコンテクスト図などを元にAs-Isを記述および分析し,それらのモデルを用いて問題点を抽出する.次にその問題点を解決するためのゴールを設定しゴールグラフを構築,このグラフを用いてAs-Isのゴールグラフやコンテクスト図を修正してTo-Beモデルを構築していく.本稿では例題としてWebサービスのセキュリティシステムをとりあげて,手法の解説を行う.
We propose a modeling technique for moving from an As-Is system to a To-Be system using goal orientation and problem frames. The technique first describes and analyzes the As-Is system using use cases, goal graphs, and context diagrams of problem frames, and extracts problems from these models. It then sets goals to solve the problems and constructs a goal graph, which is used to revise the As-Is goal graph and context diagrams into the To-Be model. This paper explains the technique using a security system for a web service as an example.
@incollection{ito-ses2014,
author = {伊藤 翔一朗 and 林 晋平 and 佐伯 元司},
title = {ゴール指向とプロブレムフレームの融合},
booktitle = {ソフトウェアエンジニアリングシンポジウム2014予稿集},
pages = {204--205},
year = 2014,
month = {sep},
}
[ito-ses2014]: as a page
版管理システムを用いたソフトウェア開発において,コミットポリシーに従ったコミットは開発者らにとって有益である.しかしポリシーに従ったコミットの構成は開発者の負担となるため,その支援が望まれる.提案手法では,ポリシーに従った粒度のコミットをソースコード編集後に得るために,開発者による編集操作履歴を,開発環境が提供する編集の種類を用いて階層的に管理する.開発者が階層の構成要素である節を指定することにより,対応する粒度のコミットの列を,変更全体の整合性を維持したまま構成することができる.手動編集を含む大きなリファクタリングを適用したソースコードと記録した編集操作履歴に対し,本手法を適用した結果,複数のポリシーが定める粒度のコミットをそれぞれ適切に構成することができた.
In software development using a version control system, commits that follow a commit policy are beneficial for developers. However, composing policy-compliant commits burdens developers, so support for this task is desired. To obtain commits of policy-compliant granularity after editing source code, the proposed technique manages the developer's edit operation history hierarchically, using the kinds of edits provided by the development environment. By selecting a node in the hierarchy, the developer can compose a sequence of commits of the corresponding granularity while keeping the whole change consistent. Applying the technique to source code and a recorded edit history of a large refactoring involving manual edits, we could appropriately compose commits at the granularities specified by multiple policies.
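A compact Python sketch of the hierarchy and the commit composition follows; the node labels and edit IDs are invented, and the real technique groups recorded edit operations by the kinds of edits the development environment reports:
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                        # e.g. "refactoring: Rename Method"
    edits: list = field(default_factory=list)
    children: list = field(default_factory=list)

def commits_at(node):
    # One policy-sized commit per direct child of the selected node.
    def collect(n):
        out = list(n.edits)
        for c in n.children:
            out += collect(c)
        return out
    return [(child.label, collect(child)) for child in node.children]

session = Node("session", children=[
    Node("refactoring: Rename Method", edits=["e1", "e2"]),
    Node("manual edit: fix guard condition", edits=["e3"]),
])
for label, edits in commits_at(session):
    print(label, edits)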
@incollection{jmatsu-ses2014,
author = {松田 淳平 and 林 晋平 and 佐伯 元司},
title = {編集操作履歴の階層的なグループ化を用いたポリシー準拠のコミットの構築},
booktitle = {ソフトウェアエンジニアリングシンポジウム2014予稿集},
pages = {76--84},
year = 2014,
month = {sep},
}
[jmatsu-ses2014]: as a page
本稿では,システム利用シナリオからセキュリティ脅威を検出しミスシナリオを出力する手法と,Security Targetの記述から脅威の対策を抽出する手法を組み合わせることで,ミスシナリオに対応する脅威の対策がなされたシナリオを出力し,セキュリティ要求獲得を支援する手法について述べる.
This paper describes a technique for supporting security requirements elicitation by combining a technique that detects security threats in system usage scenarios and outputs mis-scenarios with a technique that extracts countermeasures against threats from Security Target descriptions, thereby producing scenarios in which the threats corresponding to the mis-scenarios are mitigated.
@incollection{abe-ses2014,
author = {阿部 達也 and 林 晋平 and 佐伯 元司},
title = {システム利用シナリオからのセキュリティ脅威の検出と対策シナリオの導出に向けて},
booktitle = {ソフトウェアエンジニアリングシンポジウム2014予稿集},
pages = {206--207},
year = 2014,
month = {sep},
}
[abe-ses2014]: as a page
Goal-oriented requirements analysis (GORA) is one of the promising techniques to elicit software requirements, and it is natural to consider its application to security requirements analysis. In this paper, we propose a method for goal-oriented security requirements analysis using security knowledge derived from several security targets (STs) compliant with Common Criteria (CC, ISO/IEC 15408). We call such knowledge a security ontology for an application domain (SOAD). Three aspects of security, namely confidentiality, integrity, and availability, are included in the scope of our method because the CC addresses these three aspects. We extract security-related concepts such as assets, threats, countermeasures, and their relationships from STs, and utilize these concepts and relationships for security goal elicitation and refinement in GORA. The usage of certified STs as a knowledge source allows us to efficiently reuse security-related concepts of higher quality. To realize our proposed method as a supporting tool, we use an existing method, GOORE (goal-oriented and ontology-driven requirements elicitation method), combined with SOAD. In GOORE, terms and their relationships in a domain ontology play an important role in semantic processing such as goal refinement and conflict identification. SOAD is defined based on concepts in STs. In contrast with other goal-oriented security requirements methods, the knowledge derived from actual STs contributes to eliciting security requirements in our method. In addition, the relationships among assets, threats, objectives, and security functional requirements can be directly reused for the refinement of security goals. We present an illustrative example to show the usefulness of our method and evaluate it in comparison with other goal-oriented security requirements analysis methods.
@article{saeki-ijseke201306,
author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
title = {Enhancing Goal-Oriented Security Requirements Analysis Using Common Criteria-Based Knowledge},
journal = {International Journal of Software Engineering and Knowledge Engineering},
volume = 23,
number = 5,
pages = {695--720},
year = 2013,
month = {jun},
}
[saeki-ijseke201306]: as a page
Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. However, if changes are tangled with refactorings in a single revision, then the resulting source code differences are more complicated. We propose an interactive difference viewer which enables us to separate refactoring effects from source code differences for improving the understandability of the differences.
@inproceedings{hayashi-wcre2013,
author = {Shinpei Hayashi and Sirinut Thangthumachit and Motoshi Saeki},
title = {REdiffs: Refactoring-Aware Difference Viewer for Java},
booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering},
pages = {487--488},
year = 2013,
month = {oct},
}
[hayashi-wcre2013]: as a page
Feature location is an activity to identify correspondence between features in a system and program elements in source code. After a feature is located, developers need to understand implementation structure around the location from static and/or behavioral points of view. This paper proposes a semi-automatic technique both for locating features and exposing their implementation structures in source code, using a combination of dynamic analysis and two data analysis techniques, sequential pattern mining and formal concept analysis. We have implemented our technique in a supporting tool and applied it to an example of a web application. The result shows that the proposed technique is not only feasible but helpful to understand implementation of features just after they are located.
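The formal-concept-analysis step can be illustrated with a tiny self-contained Python sketch; the scenario-by-method table is invented, and every intersection of per-scenario method sets is treated as an intent whose extent is the set of scenarios covering it:
from itertools import combinations

trace = {  # scenario -> methods observed in its execution trace
    "checkout":  {"Cart.total", "Order.create", "Db.save"},
    "reorder":   {"Order.create", "Db.save"},
    "view cart": {"Cart.total", "Cart.render"},
}

# Every intersection of per-scenario method sets is a candidate intent.
intents = set()
scenarios = list(trace)
for r in range(1, len(scenarios) + 1):
    for group in combinations(scenarios, r):
        common = set.intersection(*(trace[s] for s in group))
        intents.add(frozenset(common))

# Each concept pairs an intent with the scenarios (extent) covering it.
for intent in sorted(intents, key=len, reverse=True):
    extent = {s for s, ms in trace.items() if intent <= ms}
    print(sorted(extent), "->", sorted(intent))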
@inproceedings{kazato-apsec2013,
author = {Hiroshi Kazato and Shinpei Hayashi and Tsuyoshi Oshima and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
title = {Extracting and Visualizing Implementation Structure of Features},
booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference},
pages = {476--484},
year = 2013,
month = {dec},
}
[kazato-apsec2013]: as a page
The elicitation of security requirements is a crucial issue to develop secure business processes and information systems of higher quality. Although we have several methods to elicit security requirements, most of them do not provide sufficient supports to identify security threats. Since threats do not occur so frequently, like exceptional events, it is much more difficult to determine the potentials of threats exhaustively rather than identifying normal behavior of a business process. To reduce this difficulty, accumulated knowledge of threats obtained from practical setting is necessary. In this paper, we present the technique to model knowledge of threats as patterns by deriving the negative scenarios that realize threats and to utilize them during business process modeling. The knowledge is extracted from Security Target documents, based on the international Common Criteria Standard, and the patterns are described with transformation rules on sequence diagrams. In our approach, an analyst composes normal scenarios of a business process with sequence diagrams, and the threat patterns matched to them derives negative scenarios. Our approach has been demonstrated on several examples, to show its practical application.
@inproceedings{abe-apsec2013,
author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki},
title = {Modeling Security Threat Patterns to Derive Negative Scenarios},
booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference},
pages = {58--66},
year = 2013,
month = {dec},
}
[abe-apsec2013]: as a page
Feature location (FL) in source code is an important task for program understanding. Existing dynamic FL techniques depend on sufficient scenarios for exercising the features to be located. However, it is difficult to prepare such scenarios because doing so requires a correct understanding of the features. This paper proposes an incremental technique for refining the identification of features, integrated with an existing FL technique using formal concept analysis. In our technique, we classify the differences between static and dynamic dependencies of method invocations based on their relevance to the identified features. According to the classification, the technique suggests method invocations to exercise unexplored parts of the features. An application example indicates the effectiveness of the approach.
@inproceedings{kazato-csmr2013,
author = {Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Tsuyoshi Oshima and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
title = {Incremental Feature Location and Identification in Source Code},
booktitle = {Proceedings of the 17th European Conference on Software Maintenance and Reengineering},
pages = {371--374},
year = 2013,
month = {mar},
}
[kazato-csmr2013]: as a page
Automated feature location techniques have been proposed to extract program elements that are likely to be relevant to a given feature. More accurate results are expected to enable developers to perform feature location more effectively. However, several experiments assessing traceability recovery have shown that analysts cannot fully utilize an accurate traceability matrix for their tasks. Because feature location deals with a certain type of traceability links, it is an important question whether the same phenomena are visible in feature location or not. To answer that question, we conducted a controlled experiment: we asked 20 subjects to locate features using lists of methods whose accuracy was controlled artificially. The result differs from that of the traceability recovery experiments: subjects given a more accurate list were able to locate a feature more accurately. However, subjects could not locate the complete implementation of features in 83% of the tasks. The results show that the accuracy of automated feature location techniques is effective, but it might be insufficient for perfect feature location.
@inproceedings{ishio-wcre2013,
author = {Takashi Ishio and Shinpei Hayashi and Hiroshi Kazato and Tsuyoshi Oshima},
title = {On the Effectiveness of Accuracy of Automated Feature Location Technique},
booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering},
pages = {381--390},
year = 2013,
month = {oct},
}
[ishio-wcre2013]: as a page
本稿では,Feature Location手法評価のためのベンチマークとして改版履歴に基づくものを取り上げ,その特徴や問題点を分析しながら,ベンチマーク作成の課題について議論する.
This paper discusses issues and challenges in feature location benchmarks using an existing benchmark based on revision histories.
@inproceedings{hayashi-wws2013,
author = {林 晋平},
title = {Feature Locationベンチマークの現状と課題},
booktitle = {ウィンターワークショップ2013・イン・那須 論文集},
pages = {47--48},
year = 2013,
month = {jan},
}
[hayashi-wws2013]: as a page
大規模なソフトウェアに対して適切な変更やテストを実施していくためには,ソフトウェアに対する要求や,開発者が変更したい機能を実現するソースコードを特定する作業が不可欠である. このような作業を支援するために,Concept Location,Feature Location,Impact Analysis,Concern Location,Traceability Recoveryなどの解析技術が提案されている.本セッションでは,これら技術の開発における問題意識,手法の実装や評価実験における技術的な課題について議論し,効果的な技術開発の方法に関する知識を整理したい.
To maintain large-scale software, developers must identify the source code that implements a requirement or a feature to be modified. Program analysis techniques including concept location, feature location, impact analysis, concern location, and traceability recovery have been proposed to support linking source code with requirements and features. In this session, we would like to discuss technical issues in designing, implementing, and evaluating novel techniques in this area.
@inproceedings{ishio-wws2013,
author = {石尾 隆 and 林 晋平},
title = {セッション紹介:ソースコードと機能の対応関係を特定する技術},
booktitle = {ウィンターワークショップ2013・イン・那須 論文集},
pages = {37--38},
year = 2013,
month = {jan},
}
[ishio-wws2013]: as a page
機能捜索(feature location)はソフトウェアの機能とその実装箇所を対応づける作業であり,ソフトウェアの理解や変更のための基礎となる.既存の機能捜索手法は分析者が機能を適切に認識していることを前提としており,認識があいまいな状況では十分な効果が得られない.本稿では,機能捜索の結果を用いて機能に対する分析者の認識を更新し,次の機能捜索の入力に反映することにより,機能の捜索と識別を反復的に改善する手法を提案する.提案手法を例題のWebアプリケーションに適用し,その有効性を確認した.
Feature location (FL) is an activity of developers which identifies correspondence between software features and program elements in source code. Existing FL techniques assumes that developers adequately recognize features to be located and thus are not fully effective when their recognition is ambiguous. This paper proposes an iterative technique for refining both location and definition of features. Using a result of FL, the technique helps developers to update their recognition of features and improve the input of FL in the next iteration. An application example indicates the effectiveness of the approach.
@article{kazato-sigss201307,
author = {風戸 広史 and 林 晋平 and 小林 隆志 and 大島 剛志 and 宮田 俊介 and 夏川 勝行 and 星野 隆 and 佐伯 元司},
title = {反復型アプローチによるソフトウェア機能の捜索と識別の改善},
journal = {電子情報通信学会技術研究報告},
volume = 113,
number = 159,
pages = {55--60},
year = 2013,
month = {jul},
}
[kazato-sigss201307]: as a page
システム設計段階において,あらかじめシステムに存在するセキュリティ脅威を検出し対策を講じることによって,より安全で高信頼度のシステムを開発することができる.しかし,脅威の検出にはセキュリティに関する知識が必要であったり,見落としを極力減らす必要があることから,コストや時間がかかる.そこで本稿では,入力されたシナリオシーケンスに対し,あらかじめ知識として保持しているセキュリティ脅威パターンとの比較によって,対象システムに発生しうる脅威を検出し,ミスシナリオとして提示する手法を提案する.シナリオ記述及びセキュリティ脅威パターンの記述において,セキュリティドメインに基づくプロファイルに従ったUML シーケンス図を利用し,セキュリティ脅威パターン作成のための知識として,コモンクライテリアで定められているセキュリティターゲットを用いる.本稿では適用事例を用いて手法の有用性を確認した.
Detecting security threats to an information system in the design phase helps to develop a secure system. However, the more threats we try to detect, the more cost and knowledge of security threats are required. In this paper, we present a technique to detect security threats and show negative scenarios by comparing normal scenarios of a business process with threat patterns created from security knowledge. The scenarios of a business process are described with sequence diagrams. The knowledge is extracted from the documents called Security Targets, which are compliant with the international standard Common Criteria. We show the usefulness of our approach with several case studies.
@article{abe-sigss201305,
author = {阿部 達也 and 林 晋平 and 佐伯 元司},
title = {シーケンス図のパターンに基づくセキュリティ脅威の検出法},
journal = {電子情報通信学会技術研究報告},
volume = 113,
number = 24,
pages = {1--6},
year = 2013,
month = {may},
}
[abe-sigss201305]: as a page
本論文では, ゴール指向要求分析法において分析者の関心事に基づいたゴール間の関係の理解, および関係の修正を支援するため, 分析者の関心事をゴールの次元, 関心事同士の関係を次元にまたがるゴール間の分解関係に対応させ, ゴールグラフを多次元拡張する. 多次元ゴールグラフではゴール分解の意味が次元間の関係で表現でき, 関心事に基づいてゴールの分解関係を理解しやすくなり, また適切な関係への修正も行いやすくなる. 提案手法を支援するツールを実装し, 次元毎にゴールグラフを表示でき, 特定の関心事に関わるゴールを集中的に分析できるようにした.また要求獲得の事例にツールを適用してその有用性を評価した.
In this paper, we propose a multi-dimensional extension of goal graphs in goal-oriented requirements analysis in order to support understanding and modifying the relations between goals. In this method, the concerns of a requirements analyst and their relations correspond to dimensions and to goal decompositions across dimensions, respectively. Based on the definitions of goal decompositions, analysts can understand and repair the relations between goals according to their concerns using the goals' dimensions. Additionally, the selection of goals to analyze is supported by showing only the goals that belong to the dimension the analyst focuses on. We have developed a supporting tool and evaluated the efficiency of the method in experiments.
@article{inouew-sigkbse201303,
author = {井上 渉 and 林 晋平 and 鵜飼 孝典 and 佐伯 元司},
title = {要求構造明確化のためのゴールグラフの多次元拡張},
journal = {電子情報通信学会技術研究報告},
volume = 112,
number = 496,
pages = {25--30},
year = 2013,
month = {mar},
}
[inouew-sigkbse201303]: as a page
要求仕様書は主に自然言語で記述されているため文意のあいまい性などの問題がある.要求分析者がこれらの問題点を認識し発見することが重要である.本論文ではIEEE 830で定義された品質特性をもとに,要求仕様書の文章構造と要求文の構文構造を用いて要求仕様書の問題点を検出する手法を提案する.提案手法では,要求仕様書全体と要求文の解析,さらに要求文間の関係解析を行い要求仕様書中の問題点を検出する.提案手法を自動化した問題点のチェッカーでは非あいまい性など6つの品質特性に関する問題点を提案手法により検出しマーキングを行うことで,使用者に対して問題点の発見を支援する.例題への適用および被験者実験によりチェッカーの有用性を評価した.評価の結果,チェッカーは低くない検出精度を持ち,また特に非あいまい性,検証可能性,追跡可能性について支援効果を持つことが示唆された.
Some requirements specification documents have several problems, such as the ambiguity of sentences, because they are mainly written in natural language. It is important for requirements analysts to find and analyze these problems. In this paper, we propose a technique for detecting problems in a requirements specification document based on the quality characteristics defined in IEEE Std 830, using the syntactical structure of the specification. Our technique analyzes the structure and relationships of the sentences and of the whole given specification. A specification checker that automates our technique can support finding problems over six quality characteristics. The evaluation results show that the checker has acceptable detection accuracy and high supporting effects, especially for checking unambiguity, verifiability, and traceability.
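Two of the checks can be illustrated with a small Python sketch; the vague-word list and the measurability test are invented simplifications of the quality characteristics in IEEE Std 830:
import re

VAGUE = {"appropriate", "fast", "user-friendly", "etc"}

def check(sentence):
    # Flag ambiguity when vague terms appear, and verifiability when the
    # sentence contains no measurable value at all.
    problems = []
    words = set(re.findall(r"[a-z\-]+", sentence.lower()))
    if words & VAGUE:
        problems.append(("unambiguity", sorted(words & VAGUE)))
    if not re.search(r"\d", sentence):
        problems.append(("verifiability", "no measurable value"))
    return problems

print(check("The system shall respond fast."))
print(check("The system shall respond within 2 seconds."))  # -> []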
@article{aruga-sigse201303,
author = {有賀 顕 and 林 晋平 and 佐伯 元司},
title = {構文と文章構造に基づく要求仕様書の問題点発見支援},
journal = {情報処理学会研究報告},
volume = {2013-SE-179},
number = 4,
pages = {1--8},
year = 2013,
month = {mar},
}
[aruga-sigse201303]: as a page
本稿ではクラスへの責務割り当てをファジィ制約充足問題として定式化することにより自動化を図り,例題に対して解を導出した結果を示す.責務とは各クラスのインスタンスが果たすべき役割を指し,それらをクラスへ適切に割り当てることによって,高品質な設計が実現される.割り当てに際しては,疎結合かつ高凝集な割り当てが望ましいなど,様々な条件を考慮することが望ましい.しかしながら,そのような条件の間にはトレードオフがあるため,現実的には条件を適度に満たす割り当てが求められ,計算機による支援が必要となる.そこで,様々な条件をファジィ制約として表現し,責務割り当てをファジィ制約充足問題として定式化する.これによって,既存の汎用的なアルゴリズムを適用出来るようになり,解としての責務割り当ての導出が可能となる.
The authors formulate the class responsibility assignment (CRA) problem as a fuzzy constraint satisfaction problem (FCSP) to automate CRA, and show the results of automatic assignments on examples. Responsibilities are contracts or obligations of objects that they should assume; by assigning them to classes appropriately, high-quality designs are realized. Typical conditions of a desirable design include low coupling between highly cohesive classes. However, because of trade-offs among such conditions, solutions that satisfy the conditions moderately are desired, and computer assistance is needed. The authors represent such conditions as fuzzy constraints and formulate CRA as an FCSP. This enables applying common algorithms that solve FCSPs to the problem and deriving solutions representing class responsibility assignments.
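The formulation can be illustrated with a minimal Python sketch; the two fuzzy constraints and their satisfaction degrees are invented, and a brute-force solver picks the assignment maximizing the minimum degree:
from itertools import product

responsibilities = ["compute total", "persist order"]
classes = ["Cart", "OrderRepository"]

def cohesion(assign):   # prefer "compute total" on Cart
    return 1.0 if assign["compute total"] == "Cart" else 0.4

def coupling(assign):   # prefer persistence kept out of Cart
    return 1.0 if assign["persist order"] != "Cart" else 0.2

constraints = [cohesion, coupling]

# A fuzzy CSP solution maximizes the minimum satisfaction degree.
best = max(
    (dict(zip(responsibilities, combo))
     for combo in product(classes, repeat=len(responsibilities))),
    key=lambda a: min(c(a) for c in constraints),
)
print(best)  # {'compute total': 'Cart', 'persist order': 'OrderRepository'}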
@article{takty-sigss201307,
author = {柳田 拓人 and 林 晋平 and 佐伯 元司 and 三村 秀典},
title = {クラス責務割り当て問題へのファジィ制約充足問題の適用},
journal = {電子情報通信学会技術研究報告},
volume = 113,
number = 159,
pages = {13--18},
year = 2013,
month = {jul},
}
[takty-sigss201307]: as a page
情報処理学会ソフトウェア工学研究会では,毎年1回参加者同士の議論を中心とした合宿形式のワークショップを開催している.2012年度は2013年1月に那須においてワークショップを開催し,例年同様活発な議論が行われた.本稿では,各テーマのセッションでの議論内容を中心に,本ワークショップについて報告する.
IPSJ Special Interest Group of Software Engineering (SIGSE) holds a workshop focusing on deep discussion among participants once a year. In the fiscal year of 2012, we had a workshop in Nasu, Tochigi prefecture, January 2013. We had a deep discussion about recent issues and future direction of software engineering. In this paper, we report each discussion held in each special theme session and the whole workshop.
@article{nnoda-sigse201307,
author = {野田 夏子 and 岡野 浩三 and 早水 公二 and 戸田 航史 and 上野 秀剛 and 石尾 隆 and 林 晋平 and 妻木 俊彦 and 中村 匡秀 and 岸 知二 and 本橋 正成 and 鷲崎 弘宜},
title = {ウィンターワークショップ2013・イン・那須報告},
journal = {情報処理学会研究報告},
volume = {2013-SE-181},
number = 11,
pages = {1--8},
year = 2013,
month = {jul},
}
[nnoda-sigse201307]: as a page
@misc{hayashi-serc-camp2013,
author = {林 晋平},
title = {ソフトウェア変更の分析(招待講演)},
howpublished = {ソフトウェア・メインテナンス研究会 第23年度全体合宿},
year = 2013,
month = {nov},
}
[hayashi-serc-camp2013]: as a page
新旧2版のプログラム要素の対応付けは,差分の抽出やプログラム要素の起源の特定など,ソフトウェア保守における様々なアクティビティの基礎技術となっている.しかし,対応付けの正しさは開発者の意図に依存するため,高精度の構造的な対応付け条件を特定することは難しく,それゆえ自動対応付け手法の出力結果に含まれる不対応や誤対応への対策が求められる.本稿では,最長共通部分列に基づくソースコード差分を対話的に修正する手法を提案する.提案手法では,差分のひとつが得られた際に,その対応付けの好ましくない箇所を分析者が指摘する.指摘に基づき,対応付けに用いる編集グラフの辺のコストを修正し,再計算を行うことにより,分析者の求める差分へ近づけていく.提案手法をツールとして実装し,その実現可能性を示した.
Matching program elements between two versions of a program is a fundamental technique for various software maintenance activities such as extracting differences and identifying the origin of program elements. However, since the correctness of a matching depends on the developer's intent, it is difficult to specify structural matching criteria of high accuracy, and countermeasures against missing or incorrect matches in the output of automated matching techniques are therefore required. This paper proposes a technique for interactively correcting source code differences based on longest common subsequences. In our technique, when a difference is obtained, the analyst points out undesirable parts of the matching. Based on this feedback, the costs of edges in the edit graph used for the matching are modified, and the difference is recomputed, gradually approaching the difference the analyst expects. We implemented the technique as a tool and showed its feasibility.
@inproceedings{hayashi-fose2013,
author = {林 晋平 and 丁 斌 and 加藤 哲平 and 佐伯 元司},
title = {ソースコード差分の対話的修正法},
booktitle = {ソフトウェア工学の基礎XX --- 第20回ソフトウェア工学の基礎ワークショップ予稿集},
pages = {41--46},
year = 2013,
month = {nov},
}
[hayashi-fose2013]: as a page
ソフトウェア構成管理では,ソースコード変更を開発者にとって意味のある単位ごとに分割してコミットを行うこと(分割コミット)が重要となる.しかし開発者は同時に複数の意図の変更を行うことがあり,分割コミットを行わずに様々な意図が混ざったコミットをすることがある.ソースコードの編集操作を収集し,それらをコミット前に分類することで分割コミットの支援を図る既存手法があるものの,分類を手動で行う必要があり,十分に労力の低減ができていない.本稿では複数の基準に基づき,ソースコード編集操作を自動で分類する手法を提案する.提案手法では,Javaソースコードにおけるファイルやクラス,メソッド,コメントおよび編集時間を分類の基準とし,開発者が選択する基準に基づき,編集操作を自動的に分類する.分類に必要な情報は,ソースコードの構文解析により得られた各構文要素の文字範囲情報と,編集操作の編集開始オフセットを比較することによって属性として抽出する.提案手法を実現するツールを開発し,例題へ適用することにより,提案手法が分類の労力を低減することを確認した.
In software configuration management, it is important to commit source code changes separately in units meaningful to developers (commit splitting). However, developers sometimes make changes with multiple intentions simultaneously and commit them together without splitting. Although an existing technique supports commit splitting by recording source code edit operations and classifying them before committing, the classification must be done manually, which does not sufficiently reduce the effort. This paper proposes a technique for automatically classifying source code edit operations based on multiple criteria: files, classes, methods, and comments in Java source code, as well as editing time. Edit operations are classified automatically according to the criteria selected by the developer. The information needed for classification is extracted as attributes by comparing the character ranges of syntactic elements obtained by parsing the source code with the starting offsets of the edit operations. We developed a tool realizing the technique and confirmed through its application to examples that the technique reduces the classification effort.
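The attribute-extraction step can be sketched in a few lines of Python; the element ranges and edit offsets are invented stand-ins for the parser output and the recorded operations:
elements = [  # (qualified name, start offset, end offset) from parsing
    ("class Cart", 0, 500),
    ("Cart.total", 120, 260),
    ("Cart.add", 280, 430),
]

edits = [{"id": "e1", "offset": 130}, {"id": "e2", "offset": 300},
         {"id": "e3", "offset": 450}]

def enclosing(offset):
    # Attribute an edit to the innermost syntactic element containing it.
    inside = [e for e in elements if e[1] <= offset < e[2]]
    return min(inside, key=lambda e: e[2] - e[1])[0] if inside else "(other)"

groups = {}
for edit in edits:
    groups.setdefault(enclosing(edit["offset"]), []).append(edit["id"])
print(groups)
# {'Cart.total': ['e1'], 'Cart.add': ['e2'], 'class Cart': ['e3']}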
@inproceedings{dhoshino-fose2013,
author = {星野 大樹 and 林 晋平 and 佐伯 元司},
title = {ソースコード編集操作の自動グループ化},
booktitle = {ソフトウェア工学の基礎XX --- 第20回ソフトウェア工学の基礎ワークショップ予稿集},
pages = {107--112},
year = 2013,
month = {nov},
}
[dhoshino-fose2013]: as a page
ソフトウェア技術者がソフトウェアシステムの要求獲得を行うためには,システムを適用する問題領域の知識が必須である.ドメインオントロジ等の問題領域の知識の明示的な記述は,要求獲得結果を完全かつ正当にすることに貢献する.ドメインオントロジ等の利用を想定した要求獲得技法はいくつか提案されている.そして,対象分野に関する文書をまとめたり,当該分野の専門家から情報を抽出することで,ドメインオントロジを作成することはできる.しかし,一般に要求獲得を行う技術者は問題分野の専門家ではないため,分野に特化した情報のみから構成されるドメインオントロジだけでは,要求獲得を漏れなく誤りなく行うことは難しい.本稿では,Webマイニングの技術を用いてドメインオントロジに技術者がドメイン知識を理解するのに有益な知識を追加し,ドメインオントロジを拡充する手法とツールを提案する.提案手法では,まず,ドメインオントロジにすでに含まれる概念を検索語として用いて,当該概念に追加すべき概念の候補群をWebから自動的に収集する.そして,既存概念毎に,既存概念との関連の深さや,Web上の文書における出現頻度や分布に基づき,候補群のランク付けを自動的に行う.これらのランク付けに基づき技術者がオントロジの拡充を行う.拡充されたオントロジが要求獲得結果の漏れのなさ,誤りの少なさを改善できることを,比較実験を通して確認して結果も示す.
Software engineers require knowledge about a problem domain when they elicit requirements for a system in that domain. Explicit descriptions of such knowledge, such as a domain ontology, contribute to eliciting requirements correctly and completely. Methods for eliciting requirements using ontologies have thus been proposed, and such ontologies are normally developed based on documents and/or experts in the problem domain. However, it is not easy for engineers to elicit requirements correctly and completely only with such a domain ontology because they are not normally experts in the problem domain. In this paper, we propose a method and a tool for enhancing a domain ontology using Web mining. Our method and tool help engineers add knowledge that is suitable for understanding the domain ontology. According to our method, candidates for such additional knowledge are gathered from Web pages using keywords in the existing domain ontology. The candidates are then prioritized based on the degree of the relationship between each candidate and the existing ontology, and on the frequency and distribution of the candidate over Web pages. Engineers finally add new knowledge to the existing ontology out of these prioritized candidates. We also show an experiment and its results confirming that the enhanced ontology enables engineers to elicit requirements more completely and correctly than the existing ontology does.
@article{kaiya-ipsjj201202,
author = {海谷 治彦 and 清水 悠太郎 and 安井 浩貴 and 海尻 賢二 and 林 晋平 and 佐伯 元司},
title = {要求獲得のためのオントロジをWebマイニングにより拡充する手法の提案と評価},
journal = {情報処理学会論文誌},
volume = 53,
number = 2,
pages = {495--509},
year = 2012,
month = {feb},
}
[kaiya-ipsjj201202]: as a page
本論文ではFeature Location(FL)を用いて対話的にソフトウェア機能の実装を理解する手法を提案する.既存のFL手法は理解コストの削減に寄与するものの,機能に対応するコード片特定のための入力の構築は依然として難しい.提案手法では,FLの入力は利用者とシステムとの対話により段階的に改善されていく.利用者は,FLにより発見したコード片を実際に読み,得た理解やコード片中に出現する識別子をもとに入力クエリを改善する.さらに,読んだコード片が理解に貢献したかの判断をシステムに与える適合フィードバックによりFLの評価関数を改善し,より適切な結果を得る.FLとコード片の読解,フィードバックを対話的に繰り返すことにより,利用者は効率的に機能の実装を理解する.提案手法の支援ツールを用いた事例においては,提案手法は非対話的手法に比べ理解の効率化に貢献することが分かった.
This paper proposes an interactive approach for efficiently understanding a feature implementation by applying feature location (FL). Although existing FL techniques can reduce the understanding cost, it is still an open issue to construct the appropriate inputs for the techniques. In our approach, the inputs of FL are incrementally improved by interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, the relevance feedback, obtained by partially judging whether or not a code fragment is required to understand, improves the evaluation score of FL. Users can then obtain more accurate results. We have implemented a supporting tool of our approach. Evaluation results using the tool show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than the non-interactive approach.
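The feedback loop can be illustrated with a simplified Python sketch; the identifier sets, the additive weight update, and the scoring scheme are assumptions, not the paper's actual evaluation function:
methods = {
    "Cart.total":  {"cart", "total", "price"},
    "Cart.render": {"cart", "html", "render"},
    "Db.save":     {"db", "save"},
}
weights = {"cart": 1.0, "total": 1.0}   # initial query terms

def score(identifiers):
    return sum(weights.get(t, 0.0) for t in identifiers)

def feedback(method, relevant, delta=0.5):
    # Relevance feedback nudges the weights of the terms the fragment uses.
    for t in methods[method]:
        weights[t] = weights.get(t, 0.0) + (delta if relevant else -delta)

print(max(methods, key=lambda m: score(methods[m])))  # 'Cart.total'
feedback("Cart.render", relevant=False)               # user: not helpful
print(sorted(methods, key=lambda m: -score(methods[m])))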
@article{hayashi-ipsjj201202,
author = {林 晋平 and 関根 克幸 and 佐伯 元司},
title = {Feature Locationを用いたソフトウェア機能の対話的な実装理解支援},
journal = {情報処理学会論文誌},
volume = 53,
number = 2,
pages = {578--589},
year = 2012,
month = {feb},
}
[hayashi-ipsjj201202]: as a page
ソフトウェアは,利用者を満足させ続けるために絶えず進化しなければならない.本論文では,このようなソフトウェア進化に関連する研究を,手法,対象,目的という三つの視点から分類する基準を示す.その上で,それぞれの分類に基づき文献調査を行った結果を示す.この分類と調査の結果は,ソフトウェア進化分野の研究動向や研究の方向性を考察する足掛かりとなる.
Software must be continually evolved to keep up with users’ needs. In this article, we propose a new taxonomy of software evolution. It consists of three perspectives: methods, targets, and objectives of evolution. We also present a literature review on software evolution based on our taxonomy. The result could provide a concrete baseline in discussing research trends and directions in the field of software evolution.
@article{omori-jssst201208,
author = {大森 隆行 and 丸山 勝久 and 林 晋平 and 沢田 篤史},
title = {ソフトウェア進化研究の分類と動向},
journal = {コンピュータソフトウェア},
volume = 29,
number = 3,
pages = {3--28},
year = 2012,
month = {aug},
}
[omori-jssst201208]: as a page
Requirements changes frequently occur at any time of a software development process, and their management is a crucial issue to develop software of high quality. Meanwhile, goal-oriented analysis techniques are being put into practice to elicit requirements. In this situation, the change management of goal graphs and its support are necessary. This paper presents a technique related to the change management of goal graphs, realizing impact analysis on a goal graph when its modifications occur. Our impact analysis detects conflicts that arise when a new goal is added, and investigates the achievability of the other goals when an existing goal is deleted. We have implemented a supporting tool for automating the analysis. Two case studies suggested the efficiency of the proposed approach.
@article{hayashi-ieicet201204,
author = {Shinpei Hayashi and Daisuke Tanabe and Haruhiko Kaiya and Motoshi Saeki},
title = {Impact Analysis on an Attributed Goal Graph},
journal = {IEICE Transactions on Information and Systems},
volume = {E95-D},
number = 4,
pages = {1012--1020},
year = 2012,
month = {apr},
}
[hayashi-ieicet201204]: as a page
ソフトウェアの要求獲得は,ステークホルダによる協調作業である.プロジェクトマネージャや分析者にとって,ステークホルダの関心事を理解し,ステークホルダの偏りや不足などの潜在的な問題を知っておくことは重要である.本稿では,ゴール指向分析手法の1つであるAGORAの成果物である属性つきゴールグラフを対象に,要求分析作業中にステークホルダと,ステークホルダのシステム品質に関する関心事の関係をアンカーマップを使って可視化した.また,この手法を利用して,システムの信頼性,効率性,使用性などのシステム品質に関する重要な関心事をもれなく獲得するためにステークホルダの不足や偏りを発見することを支援するツールを開発した.このツールは,属性つきゴールグラフから自動的に,ステークホルダと品質に関する関心事の関係を抽出し,可視化する.さらに,実装したツールを用いた評価実験により,ステークホルダの偏りや不足を同定するのに,既存によく使われているステークホルダと要求の対応表よりも短時間で,正しくできることを示した.
Software requirements elicitation is a cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as the imbalance or lack of stakeholders. This paper presents a technique and a tool that visualize the strength of stakeholders' interest in concerns on a two-dimensional screen. The tool generates anchored maps from an attributed goal graph produced by AGORA, an extended version of goal-oriented analysis methods, in which stakeholders' interest in concerns and its degree are attached as attributes of goals. Additionally, an experimental evaluation is described; its results show that users of the tool could identify the imbalance and lack of stakeholders more accurately and in a shorter time than with a table of stakeholders and requirements.
@article{ugai-ipsjj201204,
author = {鵜飼 孝典 and 林 晋平 and 佐伯 元司},
title = {要求獲得におけるステークホルダの偏りと不足を検出する可視化ツール},
journal = {情報処理学会論文誌},
volume = 53,
number = 4,
pages = {1448--1460},
year = 2012,
month = {apr},
}
[ugai-ipsjj201204]: as a page
This paper proposes structured location, a semiautomatic technique and its supporting tool both for locating features and exposing their structures in source code, using a combination of dynamic analysis, sequential pattern mining and formal concept analysis.
@inproceedings{kazato-icpc2012,
author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
title = {Toward Structured Location of Features},
booktitle = {Proceedings of the 20th IEEE International Conference on Program Comprehension},
pages = {255--256},
year = 2012,
month = {jun},
}
[kazato-icpc2012]: as a page
We propose a method to explore how to improve business by introducing information systems. We use a meta-modeling technique to specify the business itself and its metrics. The metrics are defined based on the structural information of the business model, so that they can help us to identify whether the business is good or not with respect to several different aspects. We also use a model transformation technique to specify an idea of the business improvement. The metrics help us to predict whether the improvement idea makes the business better or not. We use strategic dependency (SD) models in i* to specify the business, and attributed graph grammar (AGG) for the model transformation.
@inproceedings{kaiya-caise2012,
author = {Haruhiko Kaiya and Shunsuke Morita and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
title = {Facilitating Business Improvement by Information Systems using Model Transformation and Metrics},
booktitle = {Proceedings of the CAiSE'12 Forum at the 24th International Conference on Advanced Information Systems Engineering},
pages = {106--113},
year = 2012,
month = {jun},
}
[kaiya-caise2012]: as a page
When information systems are introduced in a social setting such as a business, the systems will have both good and bad impacts on the stakeholders in that setting. Requirements analysts have to predict such impacts in advance because stakeholders cannot decide whether the systems are really suitable for them without such a prediction. In this paper, we propose a method based on model transformation patterns for introducing suitable information systems. We use metrics of a model to predict whether a system introduction is suitable for a social setting. Through a case study, we show that our method can avoid an introduction of a system which was actually bad for some stakeholders. In the case study, we use a strategic dependency model in i* to specify the model of systems and stakeholders, and attributed graph grammar for model transformation. We focus on the responsibility and the satisfaction of stakeholders as the criteria for suitability of systems introduction in this case study.
@inproceedings{kaiya-apsec2012,
author = {Haruhiko Kaiya and Shunsuke Morita and Shinpei Ogata and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
title = {Model Transformation Patterns for Introducing Suitable Information Systems},
booktitle = {Proceedings of the 19th Asia-Pacific Software Engineering Conference},
pages = {434--439},
year = 2012,
month = {dec},
}
[kaiya-apsec2012]: as a page
Locating features in software composed of multiple layers is a challenging problem because we have to find program elements distributed over layers, which still work together to constitute a feature. This paper proposes a semi-automatic technique to extract correspondence between features and program elements among layers. By merging execution traces of each layer to feed into formal concept analysis, collaborative program elements are grouped into formal concepts and tied with a set of execution scenarios. We applied our technique to an example of web application composed of three layers. The result indicates that our technique is not only feasible but promising to promote program understanding in a more realistic context.
@inproceedings{kazato-csmr2012,
author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
title = {Feature Location for Multi-Layer System Based on Formal Concept Analysis},
booktitle = {Proceedings of the 16th European Conference on Software Maintenance and Reengineering},
pages = {429--434},
year = 2012,
month = {mar},
}
[kazato-csmr2012]: as a page
This paper proposes a concept for refactoring an edit history of source code and a technique for its automation. The aim of our history refactoring is to improve the clarity and usefulness of the history without changing its overall effect. We have defined primitive history refactorings including their preconditions and procedures, and large refactorings composed of these primitives. Moreover, we have implemented a supporting tool that automates the application of history refactorings in the middle of a source code editing process. Our tool enables developers to pursue some useful applications using history refactorings such as task level commit from an entangled edit history and selective undo of past edit operations.
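One primitive can be sketched as follows in Python; the edit representation and the (deliberately simplistic) independence test are assumptions, but they convey how a precondition guards each history refactoring:
def independent(e1, e2):
    # Edits are (file, start, end, new_text); edits interfere only when
    # they overlap in the same file (a simplistic approximation).
    return e1[0] != e2[0] or e1[2] <= e2[1] or e2[2] <= e1[1]

def swap(history, i):
    # Precondition-checked 'Swap' primitive on adjacent edits i and i+1,
    # preserving the overall effect of the history.
    a, b = history[i], history[i + 1]
    if not independent(a, b):
        raise ValueError("precondition violated: edits interfere")
    history[i], history[i + 1] = b, a

history = [("Cart.java", 10, 20, "total()"), ("Db.java", 0, 5, "open()")]
swap(history, 0)  # allowed: the edits touch different files
print(history)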
@inproceedings{hayashi-icsm2012,
author = {Shinpei Hayashi and Takayuki Omori and Teruyoshi Zenmyo and Katsuhisa Maruyama and Motoshi Saeki},
title = {Refactoring Edit History of Source Code},
booktitle = {Proceedings of the 28th IEEE International Conference on Software Maintenance},
pages = {617--620},
year = 2012,
month = {sep},
}
[hayashi-icsm2012]: as a page
Change-aware development environments have recently become feasible and reasonable. These environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, they do not always need to replay all the code changes to investigate how a particular entity of the program has been changed. Therefore, they often skip several code changes of no interest. This skipping action is an obstacle that makes many programmers hesitate in using existing replaying tools. This paper proposes a slicing mechanism that can extract only code changes necessary to construct a particular class member of a Java program from the whole history of past code changes. In this mechanism, fine-grained code changes are represented by edit operations recorded on source code of a program. The paper also presents a running tool that implements the proposed slicing and replays its resulting slices. With this tool, programmers can avoid replaying edit operations nonessential to the construction of class members they want to understand.
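The slicing itself can be conveyed with a very small Python sketch (the edit record format is an assumption): only the operations attributed to the member of interest are kept, in chronological order, for replay:
history = [
    {"time": 1, "member": "Cart#total()", "op": "insert"},
    {"time": 2, "member": "Cart#render()", "op": "insert"},
    {"time": 3, "member": "Cart#total()", "op": "replace"},
    {"time": 4, "member": "Order#ship()", "op": "insert"},
]

def slice_history(history, member):
    # Keep only the edit operations attributed to the given class member.
    return [e for e in history if e["member"] == member]

for e in slice_history(history, "Cart#total()"):
    print(e["time"], e["op"])  # replays edits 1 and 3 only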
@incollection{maruyama-ase2012,
author = {Katsuhisa Maruyama and Eijiro Kitsu and Takayuki Omori and Shinpei Hayashi},
title = {Slicing and Replaying Code Change History},
booktitle = {Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering},
pages = {246--249},
year = 2012,
month = {sep},
}
[maruyama-ase2012]: as a page
This paper proposes a technique for locating the implementation of features by combining graph cut and formal concept analysis techniques based on methods and scenarios.
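A self-contained Python sketch of the graph-cut side of the idea follows (toy call graph, unit capacities, Edmonds-Karp max-flow, standard library only); the source side of the resulting minimum cut groups the methods with the exercised feature:
from collections import deque

def source_side_of_min_cut(edges, s, t):
    # Build unit capacities in both directions (calls treated as undirected).
    cap = {}
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap[(v, u)] = cap.get((v, u), 0) + 1
    adj = {}
    for u, v in cap:
        adj.setdefault(u, []).append(v)
    while True:  # Edmonds-Karp: augment along shortest residual paths.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return set(parent)  # nodes still reachable from s: the cut's source side
        v = t
        while parent[v] is not None:  # push one unit of flow along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u

calls = [("main", "Cart.add"), ("Cart.add", "Stock.update"),
         ("main", "Ui.render"), ("Ui.render", "Theme.load")]
print(source_side_of_min_cut(calls, "Cart.add", "Theme.load"))
# -> {'Cart.add', 'Stock.update'}: methods grouped with the exercised feature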
@inproceedings{kato-iwesep2012,
author = {Teppei Kato and Shinpei Hayashi and Motoshi Saeki},
title = {Cutting a Method Call Graph for Supporting Feature Location},
booktitle = {Proceedings of the 4th International Workshop on Empirical Software Engineering in Practice},
pages = {55--57},
year = 2012,
month = {oct},
}
[kato-iwesep2012]: as a page
本稿では,プログラム理解の際の開発者のコンテキストを細粒度で採取し,またそれらを対象のプログラム理解の支援に利用する手法について議論する.
This paper discusses a technique for gathering fine-grained context of program understanding and using them for supporting the understanding activity.
@inproceedings{hayashi-wws2012,
author = {林 晋平},
title = {細粒度のプログラム理解コンテキストの採取と利用について},
booktitle = {ウィンターワークショップ2012・イン・琵琶湖 論文集},
pages = {91--92},
year = 2012,
month = {jan},
}
[hayashi-wws2012]: as a page
著者らが実施したソフトウェア進化研究に関する動向調査について紹介する.本稿ではまず,ソフトウェア進化研究を分類するための新たな基準を提案する.この分類基準では,進化研究を,手法,対象,目的の三つの視点から捉える.ついで,この基準を国際ワークショップであるIWPSEシリーズで発表された文献に適用した分類結果を示す.さらに,この結果から進化研究の動向について考察する.
We have recently carried out a comprehensive literature review on software evolution. In the review process, we have proposed and adopted a new taxonomy of software evolution research to classify research papers. In this paper we explain our software evolution taxonomy which consists of three perspectives: methods, targets, and objectives. We also discuss research trends on software evolution based on a classification result for papers published in the series of IWPSE proceedings.
@article{omori-sigss201203,
author = {大森 隆行 and 丸山 勝久 and 林 晋平 and 沢田 篤史},
title = {ソフトウェア進化研究に関する動向調査 -- IWPSEシリーズを題材に --},
journal = {電子情報通信学会技術研究報告},
volume = 111,
number = 481,
pages = {121--126},
year = 2012,
month = {mar},
}
[omori-sigss201203]: as a page
プログラムを変更する前に,開発者はまずFeature Locationにより機能に対応するソースコード上の実装箇所を特定し,続いてその箇所に関する静的構造や振る舞いを理解する.本稿では,実行トレースに系列マイニング,形式概念分析を組み合わせて適用することによって,機能の実装箇所を特定するだけではなく,その箇所の構造を半自動的に特定する手法を提案する.提案手法の支援ツールを試作し,Webアプリケーションの例題に適用した結果,提案手法が実現可能であり,また単に機能の実装箇所を特定するよりも理解に役立つことを確認した.
After a feature is located in source code, developers understand implementation structure around the location from static and/or behavioral point of view. This paper proposes a semi-automatic technique both for locating features and exposing their implementation structures in source code, using a combination of dynamic analysis and two data analysis techniques, sequential pattern mining and formal concept analysis. We have implemented our technique in a supporting tool and applied it to an example of a web application. The result shows that the proposed technique is not only feasible but helpful to understand implementation of features just after they are located.
@article{kazato-sigss201207,
author = {風戸 広史 and 林 晋平 and 岡田 敏 and 宮田 俊介 and 星野 隆 and 佐伯 元司},
title = {ソフトウェアの機能に対応する実装構造の抽出と可視化手法の提案},
journal = {電子情報通信学会技術研究報告},
volume = 112,
number = 164,
pages = {91--96},
year = 2012,
month = {jul},
}
[kazato-sigss201207]: as a page
複数のレイヤで構成されたソフトウェアでは,レイヤ間に分散したプログラム要素が協調動作して一つの機能を実現するために,機能とプログラム要素群を対応づける作業であるFeature Locationが難しい.そこで,本稿では機能とレイヤ間に分散したプログラム要素群の対応関係を半自動的に抽出する手法を提案する.提案手法ではレイヤごとの実行トレースを併合して形式概念分析への入力として用いることにより,協調動作するプログラム要素を形式概念としてグループ化し,実行トレースの集合と結びつける.Webアプリケーションの例題に提案手法を適用し,3つのレイヤに分散したプログラム要素と機能の対応関係を抽出した事例を示すことにより,手法の実現可能性を示すとともに,現実的なプログラム理解の支援に向けた応用可能性について議論する.
In multi-layer systems such as web applications, locating features is a challenging problem because each feature is often realized through a collaboration of program elements belonging to different layers. This paper proposes a semi-automatic technique to extract correspondence between features and program elements among layers, by merging execution traces of every layer to feed into formal concept analysis. To show the feasibility of our technique, we applied it to a web application which conforms to the typical three-layer architecture of Java EE and discuss its applicability to other layer systems in the real world.
@article{kazato-sigss201203,
author = {風戸 広史 and 林 晋平 and 岡田 敏 and 宮田 俊介 and 星野 隆 and 佐伯 元司},
title = {多層システムのための形式概念分析に基づくFeature Location手法の提案},
journal = {電子情報通信学会技術研究報告},
volume = 111,
number = 481,
pages = {139--144},
year = 2012,
month = {mar},
}
[kazato-sigss201203]: as a page
2012年9月にドイツ・エッセンにて開催された第27回ソフトウェア工学の自動化国際会議(ASE 2012)に参加したので,取り上げられた内容を報告し,参加と運営の両方の観点から我々の見解を述べる.
This paper gives our views on the 27th IEEE/ACM International Conference on Automated Software Engineering (ASE 2012) held at Essen, Germany on September 3-7, 2012 with the perspectives of both participation and organization.
@article{hayashi-sigss201211,
author = {林 晋平 and 丸山 勝久 and 佐伯 元司},
title = {第27回ソフトウェア工学の自動化国際会議(ASE 2012)参加報告},
journal = {電子情報通信学会技術研究報告},
volume = 112,
number = 275,
pages = {75--80},
year = 2012,
month = {nov},
}
[hayashi-sigss201211]: as a page
本稿では,2011年12月5日から8日まで,ベトナムのホーチミン市にて開催された第18回アジア太平洋ソフトウェア工学国際会議(APSEC2011)について紹介する.
This paper gives our views on the 18th Asia-Pacific Software Engineering Conference (APSEC2011) held in Ho Chi Minh City, Vietnam on December 5-8, 2011.
@article{omori-sigse201203,
author = {大森 隆行 and 大山 勝徳 and 林 晋平 and 青山 幹雄},
title = {第18回アジア太平洋ソフトウェア工学国際会議(APSEC2011)参加報告},
journal = {情報処理学会研究報告},
volume = {2012-SE-178},
number = 12,
pages = {1--8},
year = 2012,
month = {mar},
}
[omori-sigse201203]: as a page
2012年1月19日,20日の2日間に,琵琶湖コンファレンスセンター(滋賀県彦根市)にて開催したウィンターワークショップ2012・イン・琵琶湖(WW2012)の概要について報告する.
This paper reports on "Winter Workshop 2012 in Biwako (WW2012)", which was held at the Biwako Conference Center in Hikone, Shiga from January 19 through 20, 2012.
@article{maruyama-sigse201211,
author = {丸山 勝久 and 大森 隆行 and 井垣 宏 and 中村 匡秀 and 伏田 享平 and 角田 雅照 and 風戸 広史 and 岡田 譲二 and 岡野 浩三 and 坂本 一憲 and 本橋 正成 and 岸 知二 and 野田 夏子 and 小林 隆志 and 林 晋平},
title = {ウィンターワークショップ2012・イン・琵琶湖開催報告},
journal = {情報処理学会研究報告},
volume = {2012-SE-175},
number = 11,
pages = {1--8},
year = 2012,
month = {nov},
}
[maruyama-sigse201211]: as a page
@misc{kazato-fose2012,
author = {風戸 広史 and 大島 剛志 and 石尾 隆 and 林 晋平},
title = {Feature Location技術を開発者は使いこなせるか},
howpublished = {In 第19回ソフトウェア工学の基礎ワークショップ},
year = 2012,
month = {dec},
}
[kazato-fose2012]: as a page
@misc{hayashi-fose2012,
author = {林 晋平 and 大森 隆行 and 善明 晃由 and 丸山 勝久 and 佐伯 元司},
title = {Historef:編集履歴リファクタリングの支援ツール},
howpublished = {In 第19回ソフトウェア工学の基礎ワークショップ},
year = 2012,
month = {dec},
}
[hayashi-fose2012]: as a page
Object Constraint Language (OCL) is frequently applied in software development for stipulating formal constraints on software models. Its platform-independent characteristic allows for wide usage during the design phase. However, application in platform-specific processes, such as coding, is less obvious because it requires usage of bespoke tools for each platform. In this paper, we propose an approach to generate assertion code for OCL constraints for multiple platform-specific languages, using a unified framework based on structural similarities of programming languages. We have succeeded in automating the process of assertion code generation for four different languages using our tool. To show the effectiveness of our approach in terms of development effort, an experiment was carried out and summarised.
@article{rodion-ieicet201103,
author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki},
title = {Using Hierarchical Transformation to Generate Assertion Code from OCL Constraints},
journal = {IEICE Transactions on Information and Systems},
volume = {E94-D},
number = 3,
pages = {612--621},
year = 2011,
month = {mar},
}
[rodion-ieicet201103]: as a page
Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. However, if refactoring is applied between those versions, then the source code differences are more complicated, and understanding them becomes more difficult. Although many techniques for extracting refactoring effects from the differences have been studied, it is necessary to exclude the extracted refactorings' effects and reconstruct meaningful and understandable differences containing no refactoring effects. As described in this paper, we propose a novel technique to address this difficulty. Using our technique, we extract the refactoring effects and then apply them to the old version of the source code to produce differences without refactoring effects. We also implemented a support tool that helps separate refactorings automatically. An evaluation on open source software showed that our tool is applicable to all target refactorings. Our technique is therefore useful in real situations. Evaluation testing also demonstrated that the approach reduced the code differences by more than 21% on average, and that developers can understand more changes from the differences using our approach than when using the original differences in the same limited time.
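The separation step can be illustrated with a short Python sketch using difflib; the rename is applied textually here, which is a simplification of the tool's refactoring-aware transformation:
import difflib

old = ["int calc(int x) {", "  return x * 2;", "}"]
new = ["int twice(int x) {", "  return x * 2 + 1;", "}"]

def apply_rename(lines, before, after):
    # Apply the detected 'Rename Method' refactoring to the old version.
    return [l.replace(before, after) for l in lines]

old_refactored = apply_rename(old, "calc", "twice")
for line in difflib.unified_diff(old_refactored, new, lineterm=""):
    print(line)  # only the non-refactoring change ('+ 1') remains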
@inproceedings{zui-apsec2011,
author = {Sirinut Thangthumachit and Shinpei Hayashi and Motoshi Saeki},
title = {Understanding Source Code Differences by Separating Refactoring Effects},
booktitle = {Proceedings of the 18th Asia Pacific Software Engineering Conference},
pages = {339--347},
year = 2011,
month = {dec},
}
[zui-apsec2011]: as a page
Although a responsibility driven approach in object oriented analysis and design methodologies is promising, the assignment of the identified responsibilities to classes (simply, class responsibility assignment: CRA) is a crucial issue to achieve design of higher quality. The GRASP by Larman is a guideline for CRA and is being put into practice. However, since it is described in an informal way using a natural language, its successful usage greatly relies on designers' skills. This paper proposes a technique to represent GRASP formally and to automate appropriate CRA based on them. Our computerized tool automatically detects inappropriate CRA and suggests alternatives of appropriate CRAs to designers so that they can improve a CRA based on the suggested alternatives. We made preliminary experiments to show the usefulness of our tool.
@inproceedings{akiyama-models2011,
author = {Motohiro Akiyama and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
title = {Supporting Design Model Refactoring for Improving Class Responsibility Assignment},
booktitle = {Proceedings of the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems},
pages = {455--469},
year = 2011,
month = {oct},
}
[akiyama-models2011]: as a page
本稿では, 属性つきゴールグラフの品質向上を支援するツールを提案し, このツールの有効性を示す実験について述べる. ゴールに対する品質特性を, IEEE Std 830に記された要求仕様書が備えるべき品質特性に準じて, 定義した. 支援ツールでは, これらの品質特性を満たさないゴールをユーザに示す. 検証実験により, 調査すべき範囲のしぼり込みが短時間で行なえるため, 効率的に作業が行なえることが分かり, ツールの有効性を示すことができた.
In this article, a supporting tool for developing high-quality goal graphs is proposed, and the result of an experiment is described. The tool highlights goals that do not satisfy quality properties, which are defined based on IEEE Std 830. The result of the experiment shows that users can modify goal graphs in a shorter time because they can narrow down the area to focus on, and thus work more efficiently.
@inproceedings{ugai-wws2011,
author = {鵜飼 孝典 and 林 晋平 and 佐伯 元司},
title = {ゴールグラフの品質向上支援ツール},
booktitle = {ウィンターワークショップ2011・イン・修善寺 論文集},
pages = {39--40},
year = 2011,
month = {jan},
}
[ugai-wws2011]: as a page
本稿では,後の理解や利用が容易となるよう変更をその効果を変えないまま再構成する,変更履歴のリファクタリング手法について議論する.
This paper discusses techniques for refactoring change histories, i.e., improving their understandability and usability without modifying their effects.
@inproceedings{hayashi-wws2011,
author = {林 晋平},
title = {変更履歴のリファクタリングに向けて},
booktitle = {ウィンターワークショップ2011・イン・修善寺 論文集},
pages = {21--22},
year = 2011,
month = {jan},
}
[hayashi-wws2011]: as a page
オブジェクト指向開発では,設計にあたって各クラスに適切に責務を割り当てることにより,後の開発工程をより円滑に進めることができる.適切な責務割り当ての基準としてGRASPなどの指針があるが,その適用方針は定かではなく,依然として責務割り当ては困難である.これを解決するため,責務割り当てが適切でない箇所を自動的に検出して改善案を作業者に提示することにより,適切な責務割り当ての実現を支援する手法を提案する.提案手法では,責務の主な処理やその対象に関する情報を細分化する責務記述形式と,それに基づく責務割り当ての代替案の提示規則を定義する.代替案の提示規則はGRASPに基づいている.作業者は,提示された代替案に基づいて責務割り当てを洗練し,最終的な責務割り当てを決定する.提案手法を実現するツールを実装し,予備実験を行ったところ,提案手法によって高品質な責務割り当てが実現できる傾向があることを確認した.
In object-oriented design, class responsibility assignment (CRA) is important. However, it is not easy to detect where to improve a CRA even though there are principles of CRA such as GRASP. In this paper, we propose a technique for supporting the achievement of appropriate CRA by detecting inappropriate assignments automatically and suggesting candidates of appropriate CRAs to designers. In the technique, we have defined a responsibility description form in which the information included in a responsibility can be specified separately. Moreover, recommendation rules for CRA alternatives based on GRASP are defined, and responsibility descriptions are analyzed with these rules. A designer improves a CRA based on the suggested alternatives and achieves a more appropriate CRA. We have implemented a tool supporting the proposed technique and validated its usefulness through preliminary experiments.
@article{akiyama-sigss201103,
author = {秋山 幹博 and 林 晋平 and 小林 隆志 and 佐伯 元司},
title = {責務記述に基づくクラスの責務割り当て支援},
journal = {電子情報通信学会技術研究報告},
volume = 110,
number = 458,
pages = {73--78},
year = 2011,
month = {mar},
}
[akiyama-sigss201103]: as a page
本稿では, 属性つきゴールグラフの品質向上を支援するツールを提案し, このツールの有効性を示す実験について述べる. ゴールに対する品質特性を, IEEE Std 830に記された要求仕様書が備えるべき品質特性に準じて, 定義した. 支援ツールでは, これらの品質特性を満たさないゴールをユーザに提示する. 検証実験により, ユーザが考慮すべき範囲のしぼり込みが短時間で行なえるため, 効率的に作業が行なえることが分かり, ツールの有効性を示すことができた.
In this article, a supporting tool for developing high-quality goal graphs is proposed, and the result of an experiment is described. The tool highlights goals that do not satisfy quality properties, which are defined based on IEEE Std 830. The result of the experiment shows that users can modify goal graphs rapidly because they can focus on the parts that do not satisfy the quality properties.
@article{ugai-sigkbse201103,
author = {鵜飼 孝典 and 林 晋平 and 佐伯 元司},
title = {ゴールグラフの品質向上支援ツールとその評価},
journal = {電子情報通信学会技術研究報告},
volume = 110,
number = 468,
pages = {1--6},
year = 2011,
month = {mar},
}
[ugai-sigkbse201103]: as a page
ソフトウェア開発は, ステークホルダによる協調作業である. プロジェクトマネージャや分析者にとって, ステークホルダの関心事を理解し, ステークホルダの偏りや不足などの潜在的な問題を知っておくことは重要である. 我々は, システムの信頼性, 効率性, 使用性等のシステム品質に関する重要な関心事をもれなく獲得するためにステークホルダの不足や偏りを発見することを支援するツールを提案している. 提案ツールでは, ゴール指向分析手法の一つであるAGORA の成果物であるゴールグラフを対象に, 要求分析作業中にステークホルダと, ステークホルダのシステム品質に関する関心事の関係をアンカーマップを使って可視化する. 本論では, ステークホルダとステークホルダの関心事の関係を可視化するツールの実装と, その評価実験について, 少ないサンプルではあるが, 良好な結果が得られたことを述べる.
Software development is a cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as the imbalance or lack of stakeholders. A tool that visualizes the strength of stakeholders' interest in concerns on a two-dimensional screen has been proposed. The tool generates an anchored map from an attributed goal graph produced by AGORA, an extended version of goal-oriented analysis methods, which holds stakeholders' interest in concerns and its degree as attributes of goals. In this paper, an integrated implementation of an anchored map viewer and a goal graph editor is shown, and an experimental evaluation is described. Although the sample was small, the results show the tool's usefulness.
@article{ugai-siggn201103,
author = {鵜飼 孝典 and 林 晋平 and 佐伯 元司},
title = {ソフトウェア開発会議におけるステークホルダと関心事の可視化ツール},
journal = {情報処理学会研究報告},
volume = {2011-GN-79},
number = 16,
pages = {1--8},
year = 2011,
month = {mar},
}
[ugai-siggn201103]: as a page
Addressing the challenges of developing secure software systems remains an active research area in software engineering. Current research efforts have resulted in the documentation of recurring security problems as security patterns. Security patterns provide encapsulated solutions to specific security problems and can be used to build secure systems by designers with little knowledge of security. Despite this benefit, there is a lack of work focusing on evaluating the capabilities of security analysis approaches in terms of their support for incorporating security analysis patterns. This chapter presents the results of a study we conducted to examine the extent to which constructs provided by security requirements engineering approaches can support the use of security patterns as part of the analysis of security problems. To achieve this general objective, the authors used a specific security pattern and examined the challenges of representing this pattern in several security modeling approaches. The authors classify the security modeling approaches into two categories, problem-oriented and solution-oriented, and illustrate their capabilities with a well-known security pattern and some practical security examples. Based on the specific security pattern used, our evaluation results suggest that current approaches to security engineering are, to a large extent, capable of incorporating security analysis patterns.
@incollection{armstrong-sess,
author = {Armstrong Nhlabatsi and Arosha Bandara and Shinpei Hayashi and Charles B. Haley and Jan Jurjens and Haruhiko Kaiya and Atsuto Kubo and Robin Laney and Haralambos Mouratidis and Bashar Nuseibeh and Thein T. Tun and Hironori Washizaki and Nobukazu Yoshioka and Yijun Yu},
title = {Security Patterns: Comparing Modeling Approaches},
booktitle = {Software Engineering for Secure Systems: Industrial and Research Perspectives},
pages = {75--111},
publisher = {IGI Global},
year = 2011,
month = {jan},
}
[armstrong-sess]: as a page
本稿では,ソースコード編集履歴のリファクタリング手法を提案する.ソフトウェア開発では,ソースコード本体のみならず,その編集の履歴もさまざまに利用されるため,適切に管理された履歴が後段のプロセスに貢献する.しかし,現実には複雑に絡み合った不適切な履歴が生じうる.我々のアプローチでは,編集履歴を,理解性や利用性を向上させるために,その編集内容を変えないよう書き換えるリファクタリングを行う.我々は4つの基本的なリファクタリング及びそれらを組み合わせた2つの大きなリファクタリングを,それらの事前条件も含めて定義した.また,定義したリファクタリングを自動化しコードエディタに組み込んだ支援ツールを実現した.支援ツールを用いることにより,実際に編集履歴のリファクタリングが行えること,またリファクタリングにより分割コミットや変更取り消しなどの有益な応用が可能となることを示す.
This paper proposes a technique for refactoring source code edit histories. In software development, not only the source code itself but also its edit history is used in various ways, so a properly managed history contributes to downstream processes. In reality, however, complexly entangled and inappropriate histories can arise. In our approach, an edit history is refactored, i.e., rewritten without changing its overall editing effect, to improve its understandability and usability. We have defined four primitive refactorings, including their preconditions, and two larger refactorings composed of these primitives. We have also realized a supporting tool that automates the defined refactorings and is integrated into a code editor. Using the tool, we show that edit histories can actually be refactored and that refactoring enables useful applications such as task-level commits and undoing of past changes.
@incollection{hayashi-fose2011a,
author = {林 晋平 and 大森 隆行 and 善明 晃由 and 丸山 勝久 and 佐伯 元司},
title = {ソースコード編集履歴のリファクタリング手法},
booktitle = {ソフトウェア工学の基礎XVIII --- 第18回ソフトウェア工学の基礎ワークショップ予稿集},
pages = {61--70},
year = 2011,
month = {nov},
}
[hayashi-fose2011a]: as a page
プログラマが行った編集操作を記録し,それらを自由に再生可能な環境を提供することで,プログラムの時系列的変化を容易に調査できるようになってきた.しかしながら,あるプログラム要素に関係する編集操作が特定の時間帯に集中しているとは限らない.このため,編集操作の再生において,関心のない編集操作を飛ばすための早送りや巻き戻しが繰り返し発生する.本論文では,編集操作履歴全体から特定のメソッドやフィールドに関係のある編集操作だけを抜き出すスライシング手法と,抜き出した編集操作だけを再生するツールを提案する.このツールを用いることで,編集操作を再生する際に無駄な早送りや巻き戻しを削減することができ,プログラム理解の効率的な実施が期待できる.
By recording the edit operations performed by programmers and providing an environment in which they can be freely replayed, it has become easy to investigate the chronological changes of a program. However, the edit operations related to a particular program element are not necessarily concentrated in a specific period, so replaying an edit history involves repeated fast-forwarding and rewinding to skip uninteresting operations. This paper proposes a slicing technique that extracts, from the whole edit operation history, only the operations related to a specific method or field, and a tool that replays only the extracted operations. The tool reduces needless fast-forwarding and rewinding during replay and is expected to make program understanding more efficient.
@incollection{maruyama-fose2011,
author = {丸山 勝久 and 木津 栄二郎 and 大森 隆行 and 林 晋平},
title = {プログラム理解支援を目的とした編集操作スライスとその再生},
booktitle = {ソフトウェア工学の基礎XVIII --- 第18回ソフトウェア工学の基礎ワークショップ予稿集},
pages = {121--126},
year = 2011,
month = {nov},
}
[maruyama-fose2011]: as a page
@inproceedings{inouew-ncipsj2011,
author = {井上 渉 and 鵜飼 孝典 and 林 晋平 and 佐伯 元司},
title = {質問回答形式を用いた機能要求記述の解釈の齟齬の検出支援},
booktitle = {情報処理学会 第73回全国大会講演論文集},
pages = {425--426},
year = 2011,
month = {mar},
}
[inouew-ncipsj2011]: as a page
@misc{hayashi-fose2011b,
author = {林 晋平 and 小林 隆志},
title = {ソフトウェア工学勉強会への誘い},
howpublished = {In 第18回ソフトウェア工学の基礎ワークショップ},
year = 2011,
month = {nov},
}
[hayashi-fose2011b]: as a page
本論文では,多量のソフトウェア関連データを用いたソフトウェアの構築・保守支援手法及びそのために必要なデータマイニング技術の動向を,既存の研究を概観しつつ紹介する.
This paper discusses recent studies on technologies for supporting software construction and maintenance by analyzing various software engineering data. We also introduce typical data mining techniques for analyzing the data.
@article{tkobaya-jssst201008,
author = {小林 隆志 and 林 晋平},
title = {データマイニング技術を応用したソフトウェア構築・保守支援の研究動向},
journal = {コンピュータソフトウェア},
volume = 27,
number = 3,
pages = {13--23},
year = 2010,
month = {aug},
}
[tkobaya-jssst201008]: as a page
ソフトウェアの実行基盤を構成する個々の実装技術が,全体的な品質特性にどのような影響を及ぼすかを予測することは難しい.本稿では品質特性と実装技術の因果関係をベイジアンネットワークでモデル化し,実装技術の選択を支援する手法を提案する.また,ベイジアンネットワークの検証ツール上に提案手法を実装し,例題へ適用することによりその有効性を示す.
It is difficult to estimate how a combination of implementation technologies influences the quality attributes of an entire system. In this paper, we propose a technique to choose implementation technologies by probabilistically modeling causal dependencies between requirements and technologies using Bayesian networks. We have implemented our technique on a Bayesian network tool and applied it to a case study of a business application to show its effectiveness.
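The idea can be conveyed with a toy Bayesian-network computation in Python (all probabilities invented): for each candidate technology, the probability that a performance requirement holds is obtained by enumerating over an intermediate factor:
# P(fast_response | technology, caching), a conditional probability table.
p_fast = {
    ("servlet", True): 0.9, ("servlet", False): 0.6,
    ("cgi", True): 0.7,     ("cgi", False): 0.3,
}
p_caching = 0.8  # prior that a cache layer is adopted

def p_requirement(technology):
    # Marginalize out the caching variable by enumeration.
    return sum(p_fast[(technology, c)] * (p_caching if c else 1 - p_caching)
               for c in (True, False))

for tech in ("servlet", "cgi"):
    print(tech, round(p_requirement(tech), 3))
# servlet 0.84, cgi 0.62 -> choose the technology maximizing the probability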
@article{kazato-ipsjj201009,
author = {風戸 広史 and 林 晋平 and 小林 隆志 and 佐伯 元司},
title = {ベイジアンネットワークを用いたソフトウェア実装技術の選択支援},
journal = {情報処理学会論文誌},
volume = 51,
number = 9,
pages = {1765--1776},
year = 2010,
month = {sep},
}
[kazato-ipsjj201009]: as a page
This paper proposes a technique for detecting the occurrences of refactoring from source code revisions. In a real software development process, a refactoring operation may sometimes be performed together with other modifications in the same revision. This means that detecting refactorings from the differences between two versions stored in a software version archive is not usually an easy process. In order to detect these impure refactorings, we model the detection as a graph search. Our technique considers a version of a program as a state and a refactoring as a transition between two states. It then searches for the path that leads from the initial to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state for choosing the candidate refactorings to be applied next and for estimating the heuristic distance to the final state. Through case studies, we show that our approach is feasible for detecting combinations of refactorings.
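A condensed Python sketch of the search formulation follows; the program states, the single refactoring operator, and the line-set distance heuristic are toy assumptions standing in for the paper's candidate selection and distance estimation:
import heapq

def distance(a, b):
    # Heuristic: size of the symmetric difference of the line sets.
    return len(set(a) ^ set(b))

def detect(initial, final, refactorings):
    # Best-first search from the initial to the final program state.
    frontier = [(distance(initial, final), initial, [])]
    seen = {tuple(initial)}
    while frontier:
        d, state, path = heapq.heappop(frontier)
        if d == 0:
            return path  # the sequence of refactorings that explains the diff
        for name, apply in refactorings:
            nxt = apply(state)
            if tuple(nxt) not in seen:
                seen.add(tuple(nxt))
                heapq.heappush(frontier,
                               (distance(nxt, final), nxt, path + [name]))
    return None

rename = ("Rename calc->twice",
          lambda s: [l.replace("calc", "twice") for l in s])
v1 = ["int calc(int x)", "return x * 2;"]
v2 = ["int twice(int x)", "return x * 2;"]
print(detect(v1, v2, [rename]))  # ['Rename calc->twice']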
@article{hayashi-ieicet201004,
author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki},
title = {Search-Based Refactoring Detection from Source Code Revisions},
journal = {IEICE Transactions on Information and Systems},
volume = {E93-D},
number = 4,
pages = {754--762},
year = 2010,
month = {apr},
}
[hayashi-ieicet201004]: as a page
本稿では,ソフトウェア開発プロジェクトの方針に基づいて,各開発者による変更案の選択を支援する手法を提案する.提案手法では,開発者個人の主観による影響を抑制するために,複数のソフトウェアメトリクスを統合した評価関数によって変更の選択肢の優劣を判断する.また,プロジェクトの方針に基づいた選択を実現するために,ソースコードに対する変更の選択を複数のメトリクスを評価項目とする多目的意思決定とみなすことにより,階層分析法を応用して評価関数の作成を行う.予備評価においては,提案手法は変更の選択に有用であった.
This paper proposes a technique that helps each developer of a software development project select the most appropriate alternative among source code changes based on the project's commitment. In the technique, we evaluate the alternative changes by using an evaluation function integrating multiple software metrics to suppress the influence of each developer's subjectivity. By regarding the selection among alternative changes as multiple-criteria decision making, we create the function with the Analytic Hierarchy Process. A preliminary evaluation shows the usefulness of the technique.
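A minimal sketch of the AHP step, under hypothetical metrics and pairwise judgments: criterion weights are derived with the common geometric-mean approximation of the principal eigenvector and then used to score alternative changes.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP weights from a pairwise comparison matrix (geometric mean)."""
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Relative importance of metrics per project policy (hypothetical judgments).
# Order: [complexity, coupling, size]; e.g., complexity is 3x as important as size.
pairwise = [
    [1,   2,   3],
    [1/2, 1,   2],
    [1/3, 1/2, 1],
]
weights = ahp_weights(pairwise)

# Normalized metric improvements of two alternative changes (made-up values).
alternatives = {"change A": [0.9, 0.3, 0.5], "change B": [0.4, 0.8, 0.6]}
scores = {name: sum(w * v for w, v in zip(weights, vals))
          for name, vals in alternatives.items()}
print(max(scores, key=scores.get), scores)  # the policy favors change A here
```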
@article{hayashi-jssst201005,
author = {林 晋平 and 佐々木 祐輔 and 佐伯 元司},
title = {階層分析法を応用したソースコード変更案の評価},
journal = {コンピュータソフトウェア},
volume = 27,
number = 2,
pages = {118--123},
year = 2010,
month = {may},
}
[hayashi-jssst201005]: as a page
This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool seamlessly assists requirements analysts and stakeholders in their activities throughout AGORA steps, including constructing goal graphs with group work, utilizing domain ontologies for goal graph construction, detecting various types of conflicts among goals, prioritizing goals, analyzing impacts when modifying a goal graph, and version control of goal graphs.
@inproceedings{saeki-qsic2010,
author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
title = {An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and Its Implementation},
booktitle = {Proceedings of the 10th International Conference on Quality Software},
pages = {357--360},
year = 2010,
month = {jul},
}
[saeki-qsic2010]: as a page
We propose an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with sentences describing a feature, the relations between source code structures and problem domains are important. We model the knowledge of the problem domains as domain ontologies having concepts of the domains and their relations. Using semantic relations on the ontologies in addition to method invocation relations and the similarity between an identifier on the code and words in the sentences, we locate the code fragments corresponding to the given sentences. Additionally, our prioritization mechanism, which orders the located results of code fragments based on the ontologies, enables users to select and analyze the results effectively. To show the effectiveness of our approach in terms of accuracy, a case study was carried out with our proof-of-concept tool and summarized.
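The scoring idea can be roughly illustrated as follows; the mini-ontology, the identifiers, and the 0.5 weighting are all illustrative assumptions rather than the paper's actual formula.

```python
# Concept relations of a hypothetical mini-ontology for a painting application.
ontology = {("brush", "paint"), ("canvas", "paint")}

def related(c1, c2):
    return (c1, c2) in ontology or (c2, c1) in ontology

def score(method_words, sentence_words):
    overlap = len(method_words & sentence_words)  # identifier/word similarity
    semantic = sum(1 for m in method_words for s in sentence_words if related(m, s))
    return overlap + 0.5 * semantic               # the weighting is an assumption

sentence = {"paint", "line"}
methods = {"drawLine": {"draw", "line"}, "applyBrush": {"apply", "brush"}}
for name in sorted(methods, key=lambda m: score(methods[m], sentence), reverse=True):
    print(name, score(methods[name], sentence))
# drawLine scores by direct word overlap; applyBrush gains via the brush-paint relation.
```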
@inproceedings{hayashi-apsec2010,
author = {Shinpei Hayashi and Takashi Yoshikawa and Motoshi Saeki},
title = {Sentence-to-Code Traceability Recovery with Domain Ontologies},
booktitle = {Proceedings of the 17th Asia Pacific Software Engineering Conference},
pages = {385--394},
year = 2010,
month = {nov},
}
[hayashi-apsec2010]: as a page
We propose iFL, an interactive environment that is useful for effectively understanding feature implementation by application of feature location (FL). With iFL, the inputs for FL are improved incrementally by interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, the relevance feedback obtained by partially judging whether or not a fragment is relevant improves the evaluation score of FL. Users can then obtain more accurate results. Case studies with iFL show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than the non-interactive approach.
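The relevance-feedback loop can be sketched with a classic Rocchio-style update; iFL's actual evaluation score is not reproduced here, and the alpha/beta/gamma weights are conventional defaults used only for illustration.

```python
from collections import Counter

def update_query(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio update: move the query toward judged-relevant fragments and away
    from judged-irrelevant ones. The weights are conventional defaults."""
    new = Counter({term: alpha * w for term, w in query.items()})
    for doc in relevant:
        for term, w in doc.items():
            new[term] += beta * w / len(relevant)
    for doc in irrelevant:
        for term, w in doc.items():
            new[term] -= gamma * w / len(irrelevant)
    return Counter({term: w for term, w in new.items() if w > 0})

query = Counter({"undo": 1.0})
relevant = [Counter({"undo": 2, "history": 1})]   # fragment judged helpful
irrelevant = [Counter({"redo": 3})]               # fragment judged unhelpful
print(update_query(query, relevant, irrelevant))
# Counter({'undo': 2.5, 'history': 0.75}); 'redo' is suppressed
```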
@inproceedings{hayashi-icsm2010,
author = {Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki},
title = {{iFL}: An Interactive Environment for Understanding Feature Implementations},
booktitle = {Proceedings of the 26th IEEE International Conference on Software Maintenance},
pages = {1--5},
year = 2010,
month = {sep},
}
[hayashi-icsm2010]: as a page
Software development is a cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as imbalance of stakeholders or lack of stakeholders. This paper presents a tool which visualizes the strength of stakeholders' interest of concern on two dimensional screens. The proposed tool generates an anchored map from an attributed goal graph by AGORA, which is an extended version of goal-oriented analysis methods. It has information on stakeholders' interest to concerns and its degree as the attributes of goals. Results from the case study are that (1) some concerns are not connected to any stakeholders and (2) a type of stakeholders is interested in different concerns each other. The results suggest that lack of stakeholders for the unconnected concerns and need that a type of stakeholders had better to unify their requirements.
@inproceedings{ugai-rev2010,
author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki},
title = {Visualizing Stakeholder Concerns with Anchored Map},
booktitle = {Proceedings of the 5th International Workshop on Requirements Engineering Visualization},
pages = {20--24},
year = 2010,
month = {sep},
}
[ugai-rev2010]: as a page
This paper proposes a formalized technique for generating finer-grained source code deltas according to a developer's editing intentions. Using the technique, the developer classifies edit operations of source code by annotating the time series of the edit history with the switching information of their editing intentions. Based on the classification, the history is sorted and converted automatically to appropriate source code deltas to be committed separately to a version repository. This paper also presents algorithms for automating the generation process and a prototyping tool to implement them.
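A minimal sketch of the reorganization step: recorded edit operations carry an intention label taken from the developer's annotations, and a stable sort regroups them into one delta per intention while preserving time order within each group. The tuple layout is a hypothetical simplification.

```python
from itertools import groupby

edits = [  # (sequence number, intention label, edit description) -- hypothetical layout
    (1, "fix-bug",  "change condition in Foo#bar"),
    (2, "refactor", "rename local variable in Foo#bar"),
    (3, "fix-bug",  "add null check in Foo#baz"),
    (4, "refactor", "extract method from Foo#baz"),
]

def deltas_by_intention(edits):
    """Regroup edits into one delta per intention; the sort is stable, so the
    original time order is preserved inside each group."""
    by_intention = sorted(edits, key=lambda e: e[1])
    return {label: list(group)
            for label, group in groupby(by_intention, key=lambda e: e[1])}

for label, ops in deltas_by_intention(edits).items():
    print(f"-- delta '{label}' --")
    for seq, _, desc in ops:
        print(f"  {seq}: {desc}")
```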
@inproceedings{hayashi-iwpse-evol2010,
author = {Shinpei Hayashi and Motoshi Saeki},
title = {Recording Finer-Grained Software Evolution with {IDE}: An Annotation-Based Approach},
booktitle = {Proceedings of the 4th International Joint ERCIM/IWPSE Symposium on Software Evolution},
pages = {8--12},
year = 2010,
month = {sep},
}
[hayashi-iwpse-evol2010]: as a page
@incollection{tkobaya-fose2010,
author = {小林 隆志 and 林 晋平},
title = {データマイニング技術を応用したソフトウェア構築・保守支援},
booktitle = {ソフトウェア工学の基礎XVII --- 第17回ソフトウェア工学の基礎ワークショップ予稿集},
pages = {1--2},
year = 2010,
month = {nov},
}
[tkobaya-fose2010]: as a page
本稿では,ソースコードの編集履歴を開発者の編集意図ごとの集まりに再構成する手法を提案する.提案手法では,時系列に基づき記録したソースコード編集履歴に意図の切替の注釈を加えることによりこれを行う.
This paper proposes a technique to reorganize source code changes based on a developer's intentions. This is done by annotating the time series of edit operations with the switching information of the intentions.
@inproceedings{hayashi-wws2010,
author = {林 晋平},
title = {時系列への注釈に基づくソースコード変更の再構成},
booktitle = {ウィンターワークショップ2010・イン・倉敷 論文集},
pages = {15--16},
year = 2010,
month = {jan},
}
[hayashi-wws2010]: as a page
本稿ではソースコードの編集操作を変更意図ごとに分類することにより,修正を表す差分を意図ごとに構造化する手法を提案する.提案手法では,開発者はソフトウェア開発環境上でソースコードの編集に加えて意図の切り替えのタイミングを明示することにより,各編集操作を意図ごとに分類する.分類結果をもとに編集操作を並べ替えることにより,ソースコード差分を意図ごとのまとまりに分割できる.本稿では,並べ替えのアルゴリズムを与え,その自動化ツールのプロトタイプを実現した.
This paper proposes a technique for structuring source code deltas based on a developer's intentions. In this technique, the developer classifies his/her editing operations by annotating the time series of the editing history with the switching information of the intentions. Based on the classification, the history is automatically sorted and converted to appropriate patches to be committed. This paper also shows algorithms for automating the structuring process and a prototyping tool to implement them.
@article{hayashi-sigss201003,
author = {林 晋平 and 佐伯 元司},
title = {編集操作の分類に基づくソースコード差分の構造化},
journal = {電子情報通信学会技術研究報告},
volume = 109,
number = 456,
pages = {61--66},
year = 2010,
month = {mar},
}
[hayashi-sigss201003]: as a page
本稿では,振る舞いモデルからスケルトンコードや設定ファイルを生成することで,フレームワークの利用を支援するツールを提案する.提案するツールでは,フレームワークの動作モデルにカスタマイズ操作を関連付け,動作モデルと振る舞い要求との対応関係に基づきカスタマイズ操作を適用することでコード生成を行う.振る舞い要求がツールによりフレームワークモデルと自動的に対応付けられるため,利用者はフレームワークの詳細を理解することなくフレームワークの利用法を特定できる.
This paper proposes a tool which generates a skeleton code and configuration files to support framework-based software development. The framework models that the tool uses include behavior and customization operations. The skeleton code and configuration files are generated by applying the customization operations based on mappings of requirements specifications to the behavioral model of frameworks. The requirements specification is automatically mapped to the framework model by the tool. Therefore, users of the tool can identify the usage of the framework without deep understanding of the framework.
@article{zenmyo-sigss201003,
author = {善明 晃由 and 林 晋平 and 佐伯 元司},
title = {振る舞いモデルを用いたフレームワーク利用支援ツール},
journal = {電子情報通信学会技術研究報告},
volume = 109,
number = 456,
pages = {31--36},
year = 2010,
month = {mar},
}
[zenmyo-sigss201003]: as a page
ユースケース記述はソフトウェア開発における要求分析工程で用いられる.しかし,自然言語での記述は非形式的なため,ユースケース記述を網羅的に分析することは困難である.そこで,本論文ではユースケース記述からの状態遷移モデルの生成法を提案する.提案手法ではまず,ユースケース記述中の自然言語記述を解析し,格フレームを用いた形式的表現に変換する.次に,動詞の類義・対義語関係を用いて格フレームから状態変数を抽出する.ユースケース記述中の動作系列間の順序関係と,ユースケース間の関係を遷移として抽出し,状態遷移モデルを生成する.生成された状態遷移モデルをモデルチェッカに適用することで,ユースケース記述の分析を支援する.提案手法を実装した支援ツールを用いてユースケース記述を分析した結果,ユースケース記述が満たしている性質とそうでない性質を正確に判別することができた.
Use case descriptions are often used at the requirements analysis phase in a software development process. Since descriptions written in a natural language are informal, it is difficult to analyze them exhaustively. This paper proposes a method to generate state transition models from use case descriptions. First, we transform the descriptions into case frames. Second, we generate state variables from the case frames by using the synonym and antonym relationships of the verbs. Finally, we generate state transition models, extracting intra-use-case transitions from the execution order within each use case and inter-use-case transitions from their pre/post-conditions. We have implemented a supporting tool for automating the method. A case study applying the tool to use case descriptions shows the feasibility of the method.
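As a rough illustration of the final generation step, the sketch below enumerates transitions from steps that have already been reduced to pre/postconditions over boolean state variables; the paper derives such conditions from case frames and verb relationships, whereas the data here is made up.

```python
steps = [  # hypothetical ATM-like steps already reduced to pre/postconditions
    {"name": "insert card", "pre": {"card_inserted": False},
     "post": {"card_inserted": True}},
    {"name": "enter PIN", "pre": {"card_inserted": True, "authenticated": False},
     "post": {"authenticated": True}},
    {"name": "eject card", "pre": {"card_inserted": True},
     "post": {"card_inserted": False, "authenticated": False}},
]

def transitions(state, steps):
    """Enumerate the steps enabled in a state and the successor states they yield."""
    for step in steps:
        if all(state.get(var) == value for var, value in step["pre"].items()):
            yield step["name"], {**state, **step["post"]}

s0 = {"card_inserted": False, "authenticated": False}
for name, s1 in transitions(s0, steps):
    print(f"{s0} --[{name}]--> {s1}")
```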
@article{yohei-sigse201003,
author = {高久 陽平 and 林 晋平 and 佐伯 元司},
title = {ユースケース記述からの状態遷移モデル生成},
journal = {情報処理学会研究報告},
volume = {2010-SE-167},
number = 17,
pages = {1--8},
year = 2010,
month = {mar},
}
[yohei-sigse201003]: as a page
本稿では,セキュリティ機能要求を抽出するために,コモンクライテリアに準拠して書かれた文書(セキュリティターゲット)を活用する手法を述べる.セキュリティターゲットを機能要求から,脅威やセキュリティ対策を識別するための知識ソースとして用いる.我々の手法はオントロジを用いたゴール指向要求分析法(GOORE)に組み込まれている.
This paper proposes the usage of security targets, which are documents compliant with the Common Criteria (ISO/IEC 15408), as knowledge sources for identifying security functional requirements from functional requirements through eliciting threats and security objectives. Our proposed technique has been combined with GOORE, a goal-oriented requirements analysis method using ontologies.
@article{saeki-sigkbse201003,
author = {佐伯 元司 and 林 晋平 and 海谷 治彦},
title = {コモンクライテリアをドメイン知識としたゴール指向セキュリティ要求獲得法},
journal = {電子情報通信学会技術研究報告},
volume = 109,
number = 432,
pages = {37--42},
year = 2010,
month = {mar},
}
[saeki-sigkbse201003]: as a page
2010年8月30日から9月1日の3日間に東洋大学(東京都文京区)にて開催したソフトウェアエンジニアリングシンポジウム2010(SES2010)の概要について報告する.
This paper reports on ``Software Engineering Symposium 2010 (SES2010)'' held at Toyo University from August 30th to September 1st.
@article{shigo-sigse201011,
author = {紫合 治 and 松下 誠 and 野中 誠 and 丸山 勝久 and 大杉 直樹 and 鹿糠 秀行 and 川口 真司 and 菊地 奈穂美 and 林 晋平 and 真鍋 雄貴},
title = {ソフトウェアエンジニアリングシンポジウム2010開催報告},
journal = {情報処理学会研究報告},
volume = {2010-SE-170},
number = 23,
pages = {1--8},
year = 2010,
month = {nov},
}
[shigo-sigse201011]: as a page
この論文では,属性つきゴール指向要求分析法(AGORA)において,ゴールや枝に付加された属性値を活用し,ゴール指向分析の成果物であるゴールグラフの各ゴールの品質特性を評価する手法を提案する.ゴール指向分析で得られるゴールグラフの各ゴールに対して品質特性を形式的に定義することにより,ゴールグラフの品質を下げる要因になるゴールを計算により同定することが可能になる.ゴールに対する品質特性は,ステークホルダの関心に基づいて行なう.さらに実験により,これらの特性が要求仕様の品質向上に貢献することを示す.
This paper proposes a technique to measure quality characteristics of goals by using attribute values attached to goals and edges in attributed goal-oriented requirements analysis (AGORA). Formally defining the quality characteristics of each goal in a goal graph makes it possible to computationally identify the goals that lower the quality of the graph. The quality characteristics of a goal are defined in a formal way based on stakeholders' concerns. An experiment shows that these characteristics contribute to improving the quality of requirements specifications.
@incollection{ugai-fose2010,
author = {鵜飼 孝典 and 林 晋平 and 佐伯 元司},
title = {属性つきゴールグラフにおけるゴールの品質特性},
booktitle = {ソフトウェア工学の基礎XVII --- 第17回ソフトウェア工学の基礎ワークショップ予稿集},
pages = {5--14},
year = 2010,
month = {nov},
}
[ugai-fose2010]: as a page
本稿ではFeature Location(FL)を用いて対話的にソフトウェア機能の実装を理解する手法を提案する.既存のFL手法は理解コストの削減に寄与するものの,機能に対応するコード片特定のための入力の構築は依然として難しい.提案手法では,FLの入力は利用者とシステムとの対話により段階的に改善されていく.利用者は,FLにより発見したコード片を実際に読み,得た理解をもとに入力クエリを改善する.さらに,読んだコード片が理解に貢献したかの判断をシステムに与える適応フィードバックによりFLの評価関数を改善し,より適切な結果を得る.FLとコード片の読解,フィードバックを対話的に繰り返すことにより,利用者は効率的に機能の実装を理解する.事例により非対話的手法との比較を行った結果,提案手法は理解の効率化に貢献することがわかった.
This paper proposes an interactive approach for effectively understanding a feature implementation by applying feature location (FL). Although existing FL techniques can reduce the understanding cost, it is still an open issue to construct appropriate inputs for identifying the locations of features in source code. In our approach, the inputs of FL are incrementally improved through the interaction between users and the FL system. By understanding a code fragment obtained by FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, the relevance feedback obtained by partially judging whether or not a fragment contributes to the understanding improves the evaluation function of FL, so users can obtain more accurate results. Case studies show that our interactive approach is feasible and can reduce the understanding cost more effectively than the non-interactive approach.
@incollection{sekine-ses2010,
author = {関根 克幸 and 林 晋平 and 佐伯 元司},
title = {Feature Locationを用いたソースコード理解の対話的支援},
booktitle = {ソフトウェアエンジニアリング最前線2010 --- ソフトウェアエンジニアリングシンポジウム2010予稿集},
pages = {9--16},
year = 2010,
month = {aug},
}
[sekine-ses2010]: as a page
ソフトウェア開発では,様々な場面で,同じプロジェクトの新旧版を比較し,そのソースコード差分を理解する.しかし,リファクタリングが行われた版間ではリファクタリングとそうでない変更の差分が混在し,理解が難しくなる.本稿では,ソースコード差分からリファクタリングを抽出し, ソースコードに適用することにより,新旧版間のリファクタリングの影響を除外する手法を提案する.また,抽出したリファクタリングの逆操作を新版に適用し,旧版の識別子で表す差分の表示方法も提案した.支援ツールを実装し,予備評価を行った結果では,平均で19\%以上のソースコード差分を減少させることができた.
In various situations of software development, developers need to compare and understand the difference between old and new versions of a project. However, if refactorings were performed between those versions, the source code differences become intermingled and difficult to understand. In this paper, we propose a technique to support the understanding of source code differences by extracting refactorings and applying them to the source code. We also propose the old-version view, which shows the difference by using the elements of the old version. Finally, we have developed a support tool and applied it to several programs. The results show that our technique can decrease the source code difference by more than 19 percent on average.
@incollection{zui-ses2010,
author = {タンタムマチット シリナット and 林 晋平 and 佐伯 元司},
title = {リファクタリングの抽出・適用によるソースコード差分の理解支援},
booktitle = {ソフトウェアエンジニアリング最前線2010 --- ソフトウェアエンジニアリングシンポジウム2010予稿集},
pages = {17--20},
year = 2010,
month = {aug},
}
[zui-ses2010]: as a page
近年ますます大規模化するソフトウェアの生産性・保守性向上のためには既存のソフトウェア資産を再利用して新規開発量を減らすことが大切だと考える.NTT情報流通プラットフォーム研究所では,ソフトウェア開発リポジトリ内で要件定義書,機能設計書,ソースコード等のソフトウェア資産を有機的に結びつけ,それらの「再利用」,「トレーサビリティの確保」および「ソフトウェア共通部分のプラットフォーム化に向けた分析の実現」を目標としている.また,東京工業大学の佐伯研究室では,要件定義書などからソフトウェアが対象とする分野のオントロジを作成し,要求分析や同じ分野のソフトウェア開発支援に活用する研究が行われている.本発表では東工大の研究成果を利用し,NTT保有のソフトウェア資産に基づくオントロジを活用した開発リポジトリの取り組みを紹介する.
@inproceedings{yamamoto-swc2010,
author = {山本 具英 and 佐藤 宏之 and 小林 透 and 高橋 健司 and 林 晋平 and 佐伯 元司},
title = {セマンティックWebのソフトウェア資産への適用},
booktitle = {セマンティックWebコンファレンス2010 予稿集},
year = 2010,
month = {mar},
}
[yamamoto-swc2010]: as a page
@article{hayashi-ipsjm201007,
author = {林 晋平},
title = {研究会推薦博士論文速報: 開発履歴を用いたリファクタリング支援の研究},
journal = {情報処理},
volume = 51,
number = 7,
year = 2010,
month = {jul},
}
[hayashi-ipsjm201007]: as a page
ATTED-II (http://atted.jp) is a database of gene coexpression in Arabidopsis that can be used to design a wide variety of experiments, including the prioritization of genes for functional identification or for studies of regulatory relationships. Here, we report updates of ATTED-II that focus especially on functionalities for constructing gene networks with regard to the following points: (i) introducing a new measure of gene coexpression to retrieve functionally related genes more accurately, (ii) implementing clickable maps for all gene networks for step-by-step navigation, (iii) applying Google Maps API to create a single map for a large network, (iv) including information about protein-protein interactions, (v) identifying conserved patterns of coexpression and (vi) showing and connecting KEGG pathway information to identify functional modules. With these enhanced functions for gene network representation, ATTED-II can help researchers to clarify the functional and regulatory networks of genes in Arabidopsis.
@article{obayashi-nar200901,
author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
title = {{ATTED-II} provides coexpressed gene networks for Arabidopsis},
journal = {Nucleic Acids Research},
volume = 37,
number = {Database},
pages = {D987--D991},
year = 2009,
month = {jan},
}
[obayashi-nar200901]: as a page
The Object Constraint Language (OCL) carries a platform independent characteristic allowing it to be decoupled from implementation details, and therefore it is widely applied in model transformations used by model-driven development techniques. However, OCL can be found tremendously useful in the implementation phase aiding assertion code generation and allowing system verification. Yet, taking full advantage of OCL without destroying its platform independence is a difficult task. This paper proposes an approach for generating assertion code from OCL constraints by using a model transformation technique to abstract language specific details away from OCL high-level concepts, showing wide applicability of model transformation techniques. We take advantage of structural similarities of implementation languages to describe a rewriting framework, which is used to easily and flexibly reformulate OCL constraints into any target language, making them executable on any platform. A tool is implemented to demonstrate the effectiveness of this approach.
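A toy illustration of template-based constraint rewriting in the same spirit (not the paper's transformation framework): one simple OCL invariant pattern is mapped to an executable assertion. The rule set and the target syntax are assumptions.

```python
import re

# One rewriting rule: "context C inv: self.attr OP N" -> an executable assertion.
PATTERN = re.compile(r"context (\w+) inv: self\.(\w+) (>=|<=|>|<|=) (\d+)")

def rewrite(ocl: str) -> str:
    m = PATTERN.fullmatch(ocl.strip())
    if m is None:
        raise ValueError(f"no rule for: {ocl}")
    cls, attr, op, value = m.groups()
    op = "==" if op == "=" else op  # OCL '=' is equality, not assignment
    return f"assert self.{attr} {op} {value}, '{cls} invariant violated'"

print(rewrite("context Account inv: self.balance >= 0"))
# -> assert self.balance >= 0, 'Account invariant violated'
```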
@inproceedings{rodion-models2009,
author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki},
title = {Generating Assertion Code from OCL: A Transformational Approach Based on Similarities of Implementation Languages},
booktitle = {Proceedings of the ACM/IEEE 12th International Conference on Model Driven Engineering Languages and Systems},
pages = {650--664},
year = 2009,
month = {oct},
}
[rodion-models2009]: as a page
This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool seamlessly assists requirements analysts and stakeholders in their activities throughout AGORA steps, including constructing goal graphs with group work, prioritizing goals, and version control of goal graphs.
@inproceedings{saeki-ase2009,
author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
title = {A Tool for Attributed Goal-Oriented Requirements Analysis},
booktitle = {Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering},
pages = {670--672},
year = 2009,
month = {nov},
}
[saeki-ase2009]: as a page
This paper proposes an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with the functional descriptions written in the form of simple sentences, the relationships between source code structures and problem domains are important. In our approach, we model the knowledge of the problem domains as domain ontologies. By using semantic relationships of the ontologies in addition to method invocation relationships and the similarity between an identifier on the code and words in the sentences, we can detect code fragments corresponding to the sentences. A case study within a domain of painting software shows that we obtained results of higher quality than without ontologies.
@inproceedings{yoshikawa-icsm2009,
author = {Takashi Yoshikawa and Shinpei Hayashi and Motoshi Saeki},
title = {Recovering Traceability Links between a Simple Natural Language Sentence and Source Code Using Domain Ontologies},
booktitle = {Proceedings of the 25th International Conference on Software Maintenance},
pages = {551--554},
year = 2009,
month = {sep},
}
[yoshikawa-icsm2009]: as a page
This paper proposes a systematic approach to derive feature models required in a software product line development. In our approach, we use goal graphs constructed by goal-oriented requirements analysis. We merge multiple goal graphs into a graph, and then regarding the leaves of the merged graph as the candidates of features, identify their commonality and variability based on the achievability of product goals. Through a case study of a portable music player domain, we obtained a feature model with high quality.
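The commonality/variability analysis can be caricatured as set operations over the leaves of the merged goal graphs: leaves achieved in every product become mandatory features, the rest optional. This is a deliberate simplification of the achievability-based analysis in the paper, on made-up data.

```python
# Leaves of each product's goal graph (hypothetical portable-music-player family).
product_leaves = {
    "player A": {"play mp3", "shuffle", "radio"},
    "player B": {"play mp3", "shuffle"},
    "player C": {"play mp3", "radio"},
}

all_leaves = set().union(*product_leaves.values())      # feature candidates
mandatory = set.intersection(*product_leaves.values())  # common to every product
optional = all_leaves - mandatory                       # variability in the family
print("mandatory:", sorted(mandatory))  # ['play mp3']
print("optional:", sorted(optional))    # ['radio', 'shuffle']
```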
@inproceedings{uno-qsic2009,
author = {Kohei Uno and Shinpei Hayashi and Motoshi Saeki},
title = {Constructing Feature Models using Goal-Oriented Analysis},
booktitle = {Proceedings of the 9th International Conference on Quality Software},
pages = {412--417},
year = 2009,
month = {aug},
}
[uno-qsic2009]: as a page
In this paper, we propose a model-driven development technique specific to the Model-View-Controller architecture domain. Even though a lot of application frameworks and source code generators are available for implementing this architecture, they depend on implementation-specific concepts, which take much effort to learn and use. To address this issue, we define a UML profile to capture architectural concepts directly in a model and provide a set of transformation mappings for each supported platform, in order to bridge between architectural and implementation concepts. By applying these model transformations together with source code generators, our MVC-based model can be mapped to various kinds of platforms. Since we restrict the domain to the MVC architecture only, automating model transformation to source code is possible. We have prototyped a supporting tool and evaluated the feasibility of our approach through a case study. It demonstrates that model transformations specific to the MVC architecture can produce source code for two different platforms.
@inproceedings{kazato-dsm2009,
author = {Hiroshi Kazato and Rafael Weiss and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
title = {Model-View-Controller Architecture Specific Model Transformation},
booktitle = {Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling},
year = 2009,
month = {oct},
}
[kazato-dsm2009]: as a page
情報検索においては,検出結果と正解セットとを比較して算出した適合率や再現率を用いて手法を評価することが一般である.本稿では,筆者らのデザインパターン検出研究経験を例に,ソースコードからの情報抽出結果の評価法について議論する.
In information retrieval, detection techniques are evaluated by calculating precision and recall, comparing the results with a control set. This paper discusses issues and concerns in evaluating information extraction from source code, based on our experiences in detecting design patterns.
@inproceedings{hayashi-wws2009,
author = {林 晋平 and 小林 隆志},
title = {情報検出手法の評価について: デザインパターン検出を例に},
booktitle = {ウインターワークショップ2009・イン・宮崎 論文集},
pages = {27--28},
year = 2009,
month = {jan},
}
[hayashi-wws2009]: as a page
本稿では,「プログラム解析」セッションで予定される議論テーマについて紹介する.
This paper briefly introduces the topics to be discussed in the ``Program Analysis'' session.
@inproceedings{tkobaya-wws2009,
author = {小林 隆志 and 林 晋平},
title = {多量のソフトウェア関連データを用いた開発支援---「プログラム解析」セッションの紹介},
booktitle = {ウインターワークショップ2009・イン・宮崎 論文集},
pages = {1--2},
year = 2009,
month = {jan},
}
[tkobaya-wws2009]: as a page
This paper presents a supporting tool for requirements changes in Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. It is based on the presentation that the authors gave at the Requirements Engineering Conference 2008 (RE2008).
@incollection{saeki-ses2009,
author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
title = {Tool Support for Requirements Changes in AGORA},
booktitle = {ソフトウェアエンジニアリング最前線2009 --- ソフトウェアエンジニアリングシンポジウム2009予稿集},
pages = {49--50},
year = 2009,
month = {sep},
}
[saeki-ses2009]: as a page
著者らが開発した属性を付加したゴールグラフに基づく要求分析法を支援するツールについて述べる.このツールは要求獲得から要求管理までのプロセスを継ぎ目なく支援する.
This paper presents an integrated supporting tool for requirements analysis using attributed goal graphs. The tool seamlessly assists requirements analysts and stakeholders in their activities from requirements elicitation to requirements management.
@article{saeki-sigss200905,
author = {佐伯 元司 and 林 晋平 and 海谷 治彦},
title = {属性つきゴール指向要求分析法の支援のための統合ツール},
journal = {電子情報通信学会技術研究報告},
volume = 109,
number = 40,
pages = {13--18},
year = 2009,
month = {may},
}
[saeki-sigss200905]: as a page
ソフトウェアの保守プロセスでは,保守対象の機能の実装上の振る舞いを理解する必要があるため,対象機能の実装箇所特定(Feature Location)が重要である.Feature Locationでは,対象ソフトウェアに対する十分な事前知識なしに,必要な情報を過不足なく取得できることが望まれる.本論文では動的スライシングを応用したFeature Location手法を提案する.提案手法では実行系列から抽出されるスライスを対象機能に対応する実装箇所の候補とみなし,スライスとその包含関係からなるグラフ上で最適なスライスを探索する.入力された対象機能の特徴とスライスの類似性に基づいて対話的に探索することで,漸進的なFeature Locationが行える.本研究では手法を実現するツールを実装するとともに,事例によってその有用性を確認した.
To understand the behavior of a feature in the software maintenance process, identifying the location in which the feature is implemented, i.e., feature location, is important. Feature location should be performed without rich knowledge of the implementation. In this paper, we propose an incremental feature location technique using dynamic slicing. In the technique, slices extracted from an execution trace are regarded as candidate portions in which the feature is implemented. To assist maintainers in finding the suitable slice, the similarity between the slices and the characteristics of the feature is calculated. By inputting the characteristics, maintainers can interactively search, based on the similarity, for the suitable slice on the graph consisting of the slices and their inclusion relations.
@article{sekine-sigss200908,
author = {関根 克幸 and 善明 晃由 and 林 晋平 and 佐伯 元司},
title = {動的スライシングを用いた漸進的Feature Location手法},
journal = {電子情報通信学会技術研究報告},
volume = 109,
number = 170,
pages = {25--30},
year = 2009,
month = {aug},
}
[sekine-sigss200908]: as a page
In this paper, we present the development of a model-driven approach to transform platform independent models (PIMs) based on architectural patterns. Model transformation is a fundamental concept in today's software development for manipulating models during their lifecycle, e.g., due to changing requirements or platform technologies. We use model transformation techniques to transform profile-enriched UML2 models into platform specific models (PSMs). These PSMs can later be used as input for common code generation frameworks to derive platform specific implementations (PSIs). As an example of a possible architectural pattern, we define a UML profile based on the well-known Model-View-Controller (MVC) pattern, an architectural pattern commonly used in software engineering to isolate business logic from user interface considerations.
@article{rweiss-sigse200907,
author = {Rafael Weiss and Hiroshi Kazato and Shinpei Hayashi and Motoshi Saeki},
title = {Platform Independent Model Transformation based on Architectural Patterns},
journal = {情報処理学会研究報告},
volume = {2009-SE-165},
number = 4,
pages = {1--10},
year = 2009,
month = {jul},
}
[rweiss-sigse200907]: as a page
実装プラットフォームの組み合わせがシステム全体の品質特性にどのような影響を及ぼすかを把握することは重要である.本稿では品質特性と実装プラットフォームの因果関係をベイジアンネットワークでモデル化し,実装プラットフォームの選択を支援する手法を提案する.また,ベイジアンネットワークの検証ツール上に提案手法を実装し,業務アプリケーションの例題へ適用することによりその有効性を示す.
It is important to understand how a combination of implementation platforms influences quality attributes on a system. In this paper, we propose a technique to choose implementation platforms by modeling causal dependencies between requirements and platforms probabilistically using Bayesian networks. We have implemented our technique on a Bayesian network tool and applied it to a case study of a business application to show its effectiveness.
@article{kazato-sigse200907,
author = {風戸 広史 and Rafael Weiss and 林 晋平 and 小林 隆志 and 佐伯 元司},
title = {ベイジアンネットワークを用いた実装プラットフォームの選択支援},
journal = {情報処理学会研究報告},
volume = {2009-SE-165},
number = 3,
pages = {1--8},
year = 2009,
month = {jul},
}
[kazato-sigse200907]: as a page
ソフトウェア開発の要求分析工程において,要求の欠落を補填する支援法が望まれている.本稿では,パッケージソフトウェアをはじめとする,再利用可能な実現構造を知識資源として用いて,ユースケース記述における事前条件や動作系列の欠落を補填する手法を提案する.提案手法では,ユースケース記述と実現構造をラベル付き状態遷移システムに基づいてモデル化する.両モデルを合成し,実現構造の機能が実行される条件を満たしていないユースケース記述の箇所を特定することにより,条件を満たすようにユースケース記述の補填を行う.新規にSNSを開発する事例に対し,SNSパッケージOpenPNEから構築した知識資源を用いて本手法を適用した結果,1つのユースケース記述から,欠落が妥当に補填された8通りのユースケース記述を得ることができた.
In a requirements analysis process, supporting requirements elicitation is important. In this paper, we propose a technique to derive preconditions and events to be added to use case descriptions by using reusable implementation structures as knowledge resources. The descriptions and the implementation structures are modeled as labeled transition systems (LTSs). The two models are composed, and the composed model is then examined to find situations where the preconditions of the functions in the implementation structure do not hold. If such a situation exists, ways to complete the descriptions are identified based on the differences between the two models. As a case study, we have applied the proposed technique to a use case of an SNS site with OpenPNE as the knowledge resource. As a result, we obtained eight appropriately completed use case descriptions from one use case description with missing requirements.
@article{akemine-sigse200903,
author = {朱峰 錦司 and 善明 晃由 and 林 晋平 and 佐伯 元司},
title = {要求仕様と再利用可能な実現構造の振る舞いの差分検出に基づく要求分析},
journal = {情報処理学会研究報告},
volume = 2009,
number = 31,
pages = {33--40},
year = 2009,
month = {mar},
}
[akemine-sigse200903]: as a page
本論文ではゴール指向要求分析法の成果物であるゴールグラフを用いてフィーチャモデルを体系的に導出する手法を提案する.ソフトウェアプロダクトライン開発を始めるにあたってフィーチャモデルを作成する必要がある.そのためにはフィーチャの特定及びフィーチャの共通性と可変性の分析が求められる.提案手法では,まず複数の派生製品のゴールグラフを統合することでプロダクトファミリ全体の要求を把握する.統合したゴールグラフの葉ゴールからフィーチャを特定し,初期ゴールを達成するための各葉ゴールの達成条件からフィーチャの共通性と可変性を分析する.さらにフィーチャに対応する葉ゴールの親ゴールからフィーチャの存在理由であるラショナーレを導出することにより,フィーチャモデルの妥当性を調べることが可能となる.提案手法を自動化するツールを実装し,携帯音楽プレイヤーに手法を適用したところ,高品質のフィーチャモデルを得ることができた.
This paper proposes a systematic approach to derive feature models required by a software product line development. In order to construct a feature model, we have to detect features and their commonalities/variabilities. In our approach, we use goal graphs constructed by goal-oriented requirements analysis. We first merge multiple goal graphs into a graph representing the product family's requirements. We then regard the leaves of the merged graph as the candidates of features. Commonalities and variabilities are analyzed by the differences among graphs. Feature rationales derived from the graph enable us to validate the feature model. Through a case study of a portable music player domain, we obtained a feature model with high quality.
@article{uno-sigse200903,
author = {宇野 耕平 and 林 晋平 and 佐伯 元司},
title = {ゴールグラフからのフィーチャモデル導出},
journal = {情報処理学会研究報告},
volume = 2009,
number = 31,
pages = {1--8},
year = 2009,
month = {mar},
}
[uno-sigse200903]: as a page
ソフトウェアに関わる自然言語文書とソースコードの間の追跡可能性を復元する手法が望まれる.本稿では,ドメインオントロジを用いてソフトウェアの機能を記述した自然言語文書とソースコードとを対応付ける手法を提案する.文書中の単語とコード上の識別子との類似性に基づく関係と,コード上のメソッド呼び出し関係の評価にオントロジによる意味的関係を考慮することで,詳細ではない文書に対しても高精度の対応付けを行う.オープンソースソフトウェアJDraw に対する適用事例では,オントロジを用いない場合と比較して高精度の対応付け結果を得た.
Recovering traceability links between source code and its natural language documents is important. In this paper, we propose a technique for recovering the links between functional descriptions and source code using domain ontologies. By using semantic relationships of the domain ontologies in addition to method-call relationships and the similarity between identifiers in the code and words in the descriptions, we can detect source code fragments corresponding to the descriptions. Through a case study using the open-source software JDraw, we obtained results of higher quality than without ontologies.
@article{yoshikawa-sigse200903,
author = {吉川 嵩志 and 林 晋平 and 佐伯 元司},
title = {ドメインオントロジを用いた自然言語文書とソースコード間の追跡可能性の復元},
journal = {情報処理学会研究報告},
volume = 2009,
number = 31,
pages = {129--136},
year = 2009,
month = {mar},
}
[yoshikawa-sigse200903]: as a page
2009年1月23日~24日の2日間に渡り宮崎市にて開催したウインターワークショップ2009・イン・宮崎(WW2009)の概要について報告する.
This paper reports on ``Winter Workshop 2009 in Miyazaki (WW2009)'' held in Miyazaki City from 23rd to 24th January 2009.
@article{fukuyasu-sigse200905,
author = {福安 直樹 and 小林 隆志 and 林 晋平 and 中鉢 欣秀 and 中村 匡秀 and 鹿糠 秀行 and 羽生田 栄一 and 鷲崎 弘宜 and 阿萬 裕久},
title = {ウインターワークショップ2009・イン・宮崎 開催報告},
journal = {情報処理学会研究報告},
volume = {2009-SE-164},
number = 20,
pages = {1--7},
year = 2009,
month = {may},
}
[fukuyasu-sigse200905]: as a page
2008年12月2-5日に北京にて開催された第15回アジア太平洋ソフトウェア工学国際会議(APSEC 2008)に関して,我々の見解を述べる.
This paper gives our views on the 15th Asia-Pacific Software Engineering Conference (APSEC 2008) held at Beijing on December 2-5, 2008.
@article{tkobaya-sigse200907,
author = {小林 隆志 and 林 晋平 and 外村 慶二 and 天嵜 聡介},
title = {第15回アジア太平洋ソフトウェア工学国際会議(APSEC 2008)参加報告},
journal = {情報処理学会研究報告},
volume = {2009-SE-165},
number = 10,
pages = {1--7},
year = 2009,
month = {jul},
}
[tkobaya-sigse200907]: as a page
デザインパターンの有効性を保つためには,デザインパターンの適切な進化が必要である.しかし,デザインパターンは非形式的に記述されるため,その構造の把握は容易でなく,誤った進化を引き起こす.デザインパターンの進化を支援する手法とツールは,明確化された各デザインパターンの構造,および進化の妥当性の確認基準を必要とする.本稿では,デザインパターンの構造のモデル化と構造モデルに基づくパターン進化を提案する.DECORATORパターンの進化を事例とし,モデルに基づきパターン進化を表現できることを示す.また,進化の妥当性の確認基準として利用できる構造特性が得られることを示す.
To preserve the effectiveness of design patterns, we must evolve design patterns appropriately. However, evolving design patterns is not an easy task because it is difficult to understand the underlying structure of informally described design patterns. To develop methods and tools for supporting the evolution, we first need to make the structure of design patterns explicit, and we need a set of criteria for validating the evolution. In this article we propose a model-based approach for evolving design patterns in terms of their structures. We validate our approach with the evolution of the DECORATOR pattern as a case study. We then propose a structural property to be used as an evolution criterion.
@incollection{asato-ses2009,
author = {下滝 亜里 and 林 晋平 and 青山 幹雄},
title = {モデルに基づくデザインパターンの進化},
booktitle = {ソフトウェアエンジニアリング最前線2009 --- ソフトウェアエンジニアリングシンポジウム2009予稿集},
pages = {83--88},
year = 2009,
month = {sep},
}
[asato-ses2009]: as a page
類似しているソースコード片を1つにまとめることは有用だが,そのためにはまとめる効果とコストのトレードオフを考慮した判断が求められる.効果や変更可能性の判断の一部を該当コードの開発者によりコード記述時に行うことが,コード片をまとめる判断のコスト低下につながる.
@misc{hayashi-jwscs2009,
author = {林 晋平},
title = {類似コードをまとめるためのコード記述時における開発者の利用},
howpublished = {In ソースコードの類似性ワークショップ},
year = 2009,
month = {sep},
}
[hayashi-jwscs2009]: as a page
One of the approaches to improving program understanding is to extract what kinds of design patterns are used in existing object-oriented software. This paper proposes a technique for efficiently and accurately detecting occurrences of design patterns included in source code. We use both static and dynamic analyses to achieve the detection with high accuracy. Moreover, to reduce computation and maintenance costs, detection conditions are hierarchically specified based on Pree's meta patterns as common structures of design patterns. The usage of Prolog to represent the detection conditions enables us to easily add and modify them. Finally, we have implemented an automated tool as an Eclipse plug-in and conducted experiments with Java programs. The experimental results show the effectiveness of our approach.
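The paper encodes hierarchical detection conditions in Prolog and combines static and dynamic analysis; the fragment below only mirrors one structural condition, on a hypothetical class model, to show the flavor of a meta-pattern-style check.

```python
# A toy static model of three classes: which interfaces they implement and
# which types their fields reference (hypothetical data, not a real analyzer).
classes = {
    "Border": {"implements": ["Component"], "fields": ["Component"]},
    "Button": {"implements": ["Component"], "fields": []},
    "Window": {"implements": [],            "fields": ["Component"]},
}

def decorator_candidates(classes):
    """A class implementing an interface AND holding a reference to that same
    interface matches the recursive structure typical of Decorator-like patterns."""
    return [name for name, c in classes.items()
            if any(i in c["fields"] for i in c["implements"])]

print(decorator_candidates(classes))  # ['Border']
```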
@article{hayashi-ieicet200804,
author = {Shinpei Hayashi and Junya Katada and Ryota Sakamoto and Takashi Kobayashi and Motoshi Saeki},
title = {Design Pattern Detection by Using Meta Patterns},
journal = {IEICE Transactions on Information and Systems},
volume = {E91-D},
number = 4,
pages = {933--944},
year = 2008,
month = {apr},
}
[hayashi-ieicet200804]: as a page
A database of coexpressed gene sets can provide valuable information for a wide variety of experimental designs, such as targeting of genes for functional identification, gene regulation and/or protein-protein interactions. Coexpressed gene databases derived from publicly available GeneChip data are widely used in Arabidopsis research, but platforms that examine coexpression for higher mammals are rather limited. Therefore, we have constructed a new database, COXPRESdb (coexpressed gene database) (http://coxpresdb.hgc.jp), for coexpressed gene lists and networks in human and mouse. Coexpression data could be calculated for 19 777 and 21 036 genes in human and mouse, respectively, by using the GeneChip data in NCBI GEO. COXPRESdb enables analysis of the four types of coexpression networks: (i) highly coexpressed genes for every gene, (ii) genes with the same GO annotation, (iii) genes expressed in the same tissue and (iv) user-defined gene sets. When the networks became too big for the static picture on the web in GO networks or in tissue networks, we used Google Maps API to visualize them interactively. COXPRESdb also provides a view to compare the human and mouse coexpression patterns to estimate the conservation between the two species.
@article{obayashi-nar200801,
author = {Takeshi Obayashi and Shinpei Hayashi and Masayuki Shibaoka and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
title = {{COXPRESdb}: a database of coexpressed gene networks in mammals},
journal = {Nucleic Acids Research},
volume = 36,
number = {Database},
pages = {D77--D82},
year = 2008,
month = {jan},
}
[obayashi-nar200801]: as a page
Gene coexpression provides key information to understand living systems because coexpressed genes are often involved in the same or related biological pathways. Coexpression data are now used for a wide variety of experimental designs, such as gene targeting, regulatory investigations and/or identification of potential partners in protein-protein interactions. We constructed two databases for Arabidopsis (ATTED-II, http://www.atted.bio.titech.ac.jp) and mammals (COXPRESdb, http://coxpresdb.hgc.jp). Based on pairwise gene coexpression, coexpressed gene networks were prepared in these databases. To support gene coexpression, known protein-protein interactions, common metabolic pathways and conserved coexpression were also represented on the networks. We used Google Maps API to visualize large networks interactively. The relationships of the coexpression database with other large-scale data will be discussed, in addition to data construction procedures and typical usages of coexpression data.
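Pairwise coexpression is typically measured by the similarity of expression profiles; the sketch below uses Pearson correlation on made-up expression vectors as an illustrative stand-in for the databases' actual measures.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two expression profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gene_a = [1.0, 2.1, 3.0, 4.2]  # expression across four conditions (made up)
gene_b = [0.9, 2.0, 3.2, 4.1]
gene_c = [4.0, 3.1, 2.2, 1.0]
print(round(pearson(gene_a, gene_b), 2))  # strongly coexpressed (close to 1)
print(round(pearson(gene_a, gene_c), 2))  # anti-correlated (close to -1)
```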
@misc{obayashi-icar2008,
author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
title = {Preparation and usage of gene coexpression data},
howpublished = {In the 19th International Conference on Arabidopsis Research},
year = 2008,
month = {jun},
}
[obayashi-icar2008]: as a page
This paper proposes an automated technique to extract prehistories of software refactorings from existing software version archives, which is in turn a technique for discovering knowledge for finding refactoring opportunities. We focus on two types of knowledge to extract: characteristic modification histories, and fluctuations of the values of complexity measures. First, we extract modified fragments of code by calculating the difference of the Abstract Syntax Trees in the programs picked up from an existing software repository. We also extract past cases of refactorings, and then we create traces of program elements by associating modified fragments with cases of refactorings for finding the structures that frequently occur. Extracted traces help us identify how and where to refactor programs, and this leads to improved program design.
@inproceedings{hayashi-lkr2008,
author = {Shinpei Hayashi and Motoshi Saeki},
title = {Extracting Prehistories of Software Refactorings from Version Archives},
booktitle = {Large-Scale Knowledge Resources. Construction and Application -- Proceedings of the 3rd International Conference on Large-Scale Knowledge Resources},
pages = {82--89},
year = 2008,
month = {mar},
}
[hayashi-lkr2008]: as a page
This paper proposes a novel technique to detect the occurrences of refactoring from a version archive, in order to reduce the effort spent in understanding what modifications have been applied. In a real software development process, a refactoring operation may sometimes be performed together with other modifications at the same revision. This means that understanding the differences between two versions stored in the archive is not usually an easy process. In order to detect these impure refactorings, we model the detection within a graph search. Our technique considers a version of a program as a state and a refactoring as a transition. It then searches for a path from the initial state to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state for choosing the candidates of refactoring to be applied next and estimating the heuristic distance to the final state. We have clearly demonstrated the feasibility of our approach through a case study.
@inproceedings{hayashi-apsec2008,
author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki},
title = {Detecting Occurrences of Refactoring with Heuristic Search},
booktitle = {Proceedings of the 15th Asia-Pacific Software Engineering Conference},
pages = {453--460},
year = 2008,
month = {dec},
}
[hayashi-apsec2008]: as a page
本稿では,プログラム改善の再利用支援に利用する,既存の版管理履歴中の改善例に関わる変更を閲覧・分析するためのツールについて述べる.
This paper proposes an automated tool for analyzing program modifications corresponding to examples of program refinements included in existing software version archives. The tool helps developers reuse the refinements in current software.
@inproceedings{hayashi-wws2008,
author = {林 晋平},
title = {プログラム改善の分析のための変更閲覧環境},
booktitle = {ウインターワークショップ2008・イン・道後 論文集},
pages = {25--26},
year = 2008,
month = {jan},
}
[hayashi-wws2008]: as a page
@misc{hayashi-dendai-seminar2008,
author = {林 晋平},
title = {ソフトウェア芸術主義 vs. ソフトウェア工学},
howpublished = {情報セキュリティ研究室 プログラミング講習(招待講演)},
year = 2008,
month = {jul},
}
[hayashi-dendai-seminar2008]: as a page
ソフトウェア開発を効率よく行うために,仕様書とソースコードは互いに対応付いていなければならない.本稿では,自然言語で書かれた仕様書とJavaソースコードの対応付けを行う手法を提案する.ソースコードを文書の一種とみなし,単語の類似性により対応付く箇所を探す.提案手法では,仕様書の章構造とソースコードから抽出できる構造の類似性を用いて対応付けを行うことにより,精度の向上をはかる.適用事例を通して,本手法の有用性を確認した.
The specification document and the source code of a software project have to be traceable to each other for developing the software effectively. In this paper, we propose a technique to recover the traceability links between a specification written in a natural language and Java source code. In our approach, we consider source code as a kind of document and detect the parts of a specification corresponding to a source code fragment using the similarity of word occurrences. Furthermore, to improve the detection precision, we use the similarity of the document structures of the specification and the source code. Through a case study, we validate the feasibility of our approach.
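The word-occurrence similarity at the core of the approach can be sketched as cosine similarity between bag-of-words vectors; the structural-similarity refinement described above is omitted here, and the token data is hypothetical.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A specification section and a code fragment treated as documents (made up).
spec_section = Counter("the user logs in with a password".split())
code_as_doc = Counter(["login", "user", "password", "check", "password"])
print(round(cosine(spec_section, code_as_doc), 3))  # ~0.43 via shared words
```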
@article{tahara-sigse200803,
author = {田原 貴光 and 林 晋平 and 佐伯 元司},
title = {仕様書と{Java}ソースコードの構造の類似性に基づく対応付け},
journal = {情報処理学会研究報告},
volume = 2008,
number = 29,
pages = {139--146},
year = 2008,
month = {mar},
}
[tahara-sigse200803]: as a page
本稿では,ソフトウェアメトリクスの値の変化をコードエディタ上に可視化することにより,開発者のプログラム変更を支援する手法を提案する.プログラムの保守品質を高く保つためには,プログラム変更時にそれを低下させないことが望ましい.提案手法では,開発者が対象プログラムに対して行った変更を,それによって生じた対象プログラムでのメトリクス値の変化で評価する.評価値をコードエディタなどのソフトウェア開発環境上に可視化することにより,開発者は不適切な変更を早期に認識し,改善することができる.メトリクス値の変化の評価の際には,その基準を対象プログラムの過去の変更履歴を考慮して与えることにより可視化すべき対象のプログラム片を制限し,可視化が開発者のコーディング作業に与える負の影響を抑制する.
This paper proposes a novel technique for supporting program modifications by visualizing the fluctuations of software metric values. In program editing processes, it is desirable to modify programs without degrading their maintainability. Our technique evaluates program modifications based on the changes in software metric values that they cause. In order to detect and refine inappropriate modifications, we immediately visualize the changes after the program modifications in a software development environment such as a code editor. By inferring evaluation criteria from the version archive of the program, we can restrict the targets of visualization and reduce the negative effects on the developers' coding process.
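A minimal sketch of the evaluation step, assuming a crude complexity proxy and a zero threshold (both assumptions): the metric is computed before and after an edit, and a worsening delta is flagged for visualization.

```python
def cyclomatic_complexity(source: str) -> int:
    """Very rough proxy: 1 + number of branching keywords in the text."""
    return 1 + sum(source.count(k) for k in ("if ", "for ", "while ", "case "))

def evaluate_edit(before: str, after: str, threshold: int = 0):
    """Flag an edit for highlighting when the metric worsens beyond the threshold."""
    delta = cyclomatic_complexity(after) - cyclomatic_complexity(before)
    return ("highlight in editor" if delta > threshold else "ok", delta)

before = "if x: f()\n"
after = "if x: f()\nif y:\n    for i in r: g(i)\n"
print(evaluate_edit(before, after))  # ('highlight in editor', 2)
```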
@article{hayashi-sigse200803,
author = {林 晋平 and 佐伯 元司},
title = {メトリクス値の変化の可視化によるプログラム変更の支援},
journal = {情報処理学会研究報告},
volume = 2008,
number = 29,
pages = {115--122},
year = 2008,
month = {mar},
}
[hayashi-sigse200803]: as a page
ソースコードに対する変更として適切なものを選択することは難しい.本稿では,ソフトウェア開発プロジェクトの方針に基づいて,各開発者が適切な変更を選択することを支援する手法を提案する.提案手法では,開発者個人の主観による影響を抑制するために,複数のソフトウェアメトリクスを統合した評価関数によって変更の選択肢の優劣を判断する.また,プロジェクトの方針に基づいた選択を実現するために,ソースコードに対する変更の選択を,複数のメトリクスを評価項目とする多目的意思決定とみなすことにより,評価関数の作成を階層分析法を応用した方法によって行う.本稿で行った評価では,提案手法は変更の選択の支援に有用であるという結論を得た.
Selecting the most appropriate alternative of source code modification is difficult. This paper proposes a technique to help each developer of a software development project select appropriate modifications based on the project's commitment. In the technique, we judge the relative superiority of alternative modifications by creating an evaluation function integrating multiple software metrics to suppress the influence of each developer's subjectivity. By regarding the selection of a source code modification as multiple-criteria decision making, we create the function with the Analytic Hierarchy Process. An evaluation shows the usefulness of the technique.
@article{sasaki-y-sigse200803,
author = {佐々木 祐輔 and 林 晋平 and 佐伯 元司},
title = {ソフトウェアメトリクスの統合によるソースコード変更の選択},
journal = {情報処理学会研究報告},
volume = 2008,
number = 29,
pages = {123--130},
year = 2008,
month = {mar},
}
[sasaki-y-sigse200803]: as a page
2007年12月3-5日名古屋にて開催された第14回アジア太平洋ソフトウェア工学国際会議(APSEC 2007)に関して,主催者側および参加者側からの見解を述べる.
This paper gives our views on the 14th Asia-Pacific Software Engineering Conference (APSEC 2007) held at Nagoya on December 3-7, 2007.
@article{maruyama-sigse200803,
author = {丸山 勝久 and 川口 真司 and 名倉 正剛 and 林 晋平 and 鷲崎 弘宜 and 羽生田 栄一},
title = {第14回アジア太平洋ソフトウェア工学国際会議({APSEC 2007})開催および参加報告},
journal = {情報処理学会研究報告},
volume = 2008,
number = 29,
pages = {227--234},
year = 2008,
month = {mar},
}
[maruyama-sigse200803]: as a page
@inbook{washizaki-engineermind200801,
author = {鷲崎 弘宜 and 林 晋平 and 羽生田 栄一},
title = {第14回アジア太平洋ソフトウェア工学国際会議{APSEC}開催},
booktitle = {エンジニアマインド},
pages = {204--205},
publisher = {技術評論社},
year = 2008,
month = {jan},
}
[washizaki-engineermind200801]: as a page
Publicly available database of co-expressed gene sets would be a valuable tool for a wide variety of experimental designs, including targeting of genes for functional identification or for regulatory investigation. Here, we report the construction of an Arabidopsis thaliana trans-factor and cis-element prediction database (ATTED-II) that provides co-regulated gene relationships based on co-expressed genes deduced from microarray data and the predicted cis elements. ATTED-II (http://www.atted.bio.titech.ac.jp) includes the following features: (i) lists and networks of co-expressed genes calculated from 58 publicly available experimental series, which are composed of 1388 GeneChip data in A.thaliana; (ii) prediction of cis-regulatory elements in the 200 bp region upstream of the transcription start site to predict co-regulated genes amongst the co-expressed genes; and (iii) visual representation of expression patterns for individual genes. ATTED-II can thus help researchers to clarify the function and regulation of particular genes and gene networks.
@article{obayashi-nar200701,
author = {Takeshi Obayashi and Kengo Kinoshita and Kenta Nakai and Masayuki Shibaoka and Shinpei Hayashi and Motoshi Saeki and Daisuke Shibata and Kazuki Saito and Hiroyuki Ohta},
title = {{ATTED-II}: a database of co-expressed genes and {\it cis} elements for identifying co-regulated gene groups in {\it Arabidopsis}},
journal = {Nucleic Acids Research},
volume = 35,
number = {Database},
pages = {D863--D869},
year = 2007,
month = {jan},
}
[obayashi-nar200701]: as a page
本稿では,既存の多くのプログラム解析手法がオンライン型の利用に向かないことを示す.またそれを改善するために,別途軽量解析を行い,解析手法を選択的に適用する枠組みについて述べる.
In this paper, we show that many existing program analysis methods are not suited to online use. To improve this, we then discuss a framework that performs a separate lightweight analysis in order to selectively apply these analysis methods.
@inproceedings{hayashi-wws2007,
author = {林 晋平},
title = {プログラム解析手法の選択・適用のための軽量解析の枠組み},
booktitle = {ウィンターワークショップ2007・イン・那覇 論文集},
pages = {25--26},
year = 2007,
month = {jan},
}
[hayashi-wws2007]: as a page
@misc{hayashi-dendai-seminar2007,
author = {林 晋平},
title = {ソフトウェア芸術主義とソフトウェア工学},
howpublished = {In 情報セキュリティ研究室 プログラミング講習(招待講演)},
year = 2007,
month = {jul},
}
[hayashi-dendai-seminar2007]: as a page
The Object Constraint Language carries a platform independent characteristic, which allows it to be decoupled from the platform specific implementation details; on the other hand, it can be found tremendously useful in the implementation phase, aiding test case generation and allowing system verification. However, taking full advantage of OCL without destroying its platform independence is a difficult task. This paper proposes an approach to tackle this problem by taking advantage of hierarchical structural similarities of programming languages to describe a rewriting framework, which is used to easily and flexibly reformulate OCL constraints into any target language, thus making them executable on any platform. A tool is implemented to demonstrate the effectiveness of this approach.
@article{rodion-sigse200709,
author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki},
title = {Implementing {OCL} Evaluators Based on Structural Similarities of Programming Languages},
journal = {情報処理学会研究報告},
volume = 2007,
number = 97,
pages = {119--126},
year = 2007,
month = {sep},
}
[rodion-sigse200709]: as a page
開発履歴中のリファクタリング操作を識別することはソフトウェア理解に有用である.本研究では,履歴中の二つのリビジョン間で行われたリファクタリング操作列を特定する手法を提案する.提案手法では,プログラムを状態,リファクタリング操作を状態遷移とみなし,旧リビジョンから新リビジョンへ至るリファクタリング操作列を探索する.探索の際には,現在の状態と目標状態とのプログラム間の差分を求め,リファクタリング操作の選択と,目標状態までの距離の見積もりに用いる.提案手法では,同時に行われた関連する複数のリファクタリングも抽出することができる.リファクタリング操作列の探索を行うツールを実装し,適用実験を行うことで,提案手法の有用性を確認した.
Extracting refactorings from the development history is useful for software understanding. This paper proposes a technique to identify refactorings performed between two revisions using a search algorithm. In this technique, we consider a program as a state and a refactoring as a transition, and then search for refactorings that reach from the initial state to the final state. In searching, we calculate the difference between the current state and the final state for choosing the next refactoring to apply and also estimate the heuristic distance to the final state. By using this technique, we can detect related refactorings performed at the same time. Finally, we implemented a tool and evaluated its effectiveness.
@article{tsuda-sigse200703,
author = {津田 泰幸 and 林 晋平 and 佐伯 元司},
title = {探索手法を用いたリファクタリング情報の抽出},
journal = {情報処理学会研究報告},
volume = 2007,
number = 33,
pages = {135--142},
year = 2007,
month = {mar},
}
[tsuda-sigse200703]: as a page
@inproceedings{hayashi-lkr2007,
author = {Shinpei Hayashi},
title = {A Technique for Supporting Refactoring Activities by Using Software Repositories},
booktitle = {Proceedings of the Symposium on Large-scale Knowledge Resources},
pages = {147--150},
year = 2007,
month = {mar},
}
[hayashi-lkr2007]: as a page
Refactoring is one of the promising techniques for improving program design by means of program transformation with preserving behavior, and is widely applied in practice. However, it is difficult for engineers to identify how and where to refactor programs, because proper knowledge and skills of a high order are required of them. In this paper, we propose the technique to instruct how and where to refactor a program by using a sequence of its modifications. We consider that the histories of program modifications reflect developers' intentions, and focusing on them allows us to provide suitable refactoring guides. Our technique can be automated by storing the correspondence of modification patterns to suitable refactoring operations. By implementing an automated supporting tool, we show its feasibility. The tool is implemented as a plug-in for Eclipse IDE. It selects refactoring operations by matching between a sequence of program modifications and modification patterns.
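The matching between modification histories and stored patterns can be sketched as follows; the operation names, patterns, and window size are hypothetical stand-ins for the plug-in's actual pattern language.

```python
# Stored modification patterns mapped to refactoring suggestions (hypothetical).
PATTERNS = {
    ("copy method", "paste method", "edit pasted body"):
        "Extract Method / remove duplication",
    ("add parameter", "add parameter", "add parameter"):
        "Introduce Parameter Object",
}

def suggest(history, window=3):
    """Compare the most recent edit operations against the stored patterns."""
    recent = tuple(history[-window:])
    for pattern, refactoring in PATTERNS.items():
        if recent == pattern:
            return refactoring
    return None

history = ["open file", "copy method", "paste method", "edit pasted body"]
print(suggest(history))  # Extract Method / remove duplication
```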
@article{hayashi-ieicet200604,
author = {Shinpei Hayashi and Motoshi Saeki and Masahito Kurihara},
title = {Supporting Refactoring Activities Using Histories of Program Modification},
journal = {IEICE Transactions on Information and Systems},
volume = {E89-D},
number = 4,
pages = {1403--1412},
year = 2006,
month = {apr},
}
[hayashi-ieicet200604]: as a page
In this poster, we discuss the need for collecting and analyzing program modification histories, i.e., sequences of fine-grained program editing operations. We then introduce Eclipse plug-ins that can collect and analyze modification histories, and show a useful application that suggests suitable refactoring opportunities to developers by analyzing the histories.
@misc{hayashi-etx2006,
author = {Shinpei Hayashi and Motoshi Saeki},
title = {Eclipse Plug-ins for Collecting and Analyzing Program Modifications},
howpublished = {In Eclipse Technology eXchange Workshop},
year = 2006,
month = {oct},
}
[hayashi-etx2006]: as a page
本稿では,オープンソースソフトウェアリポジトリ中のソースコード片に注釈を付与することにより,ソフトウェア理解を支援するシステムを提案する.提案システムでは,コード片を指し示す識別子を介して注釈付与を行うことにより,注釈サービスと版管理リポジトリとの結合を緩めている.
In this paper, we propose a system which enhances the comprehension of open source software by allowing anybody to annotate snippets of source code. We use identifiers which uniquely locate snippets of source code, allowing us to loosen the coupling between the annotation service and the version control repository.
@inproceedings{hayashi-websde2006,
author = {林 晋平},
title = {ソースコードへの注釈付与による集合知を利用したソフトウェア理解},
booktitle = {Proceedings of the Workshop on Leveraging Web2.0 Technologies in Software Development Environments},
pages = {12--13},
year = 2006,
month = {sep},
}
[hayashi-websde2006]: as a page
リファクタリングの適用候補を特定することは,ソフトウェアの品質の向上につながるため有用である.筆者らはこれまでに,ソフトウェア開発環境に対して開発者が行った変更操作の履歴を利用して適用すべきリファクタリングを特定する手法について取り組んできた.しかし,この手法では識者が変更履歴の特徴をパターンとして事前に作成する必要があった.本稿では,リファクタリングの兆候となる変更履歴の特徴を既存のソフトウェアリポジトリから発見する手法の枠組みを提案する.提案手法では,まず開発者が過去に行った変更の履歴をソフトウェアリポジトリから抽出する.履歴は,プログラムの抽象構文木から差分を計算することにより編集スクリプトの列として求める.続いて,同リポジトリから過去に行われたリファクタリングの事例を取り出し,行われたリファクタリングに関連する構文要素の過去の変更履歴を調べることにより,変更履歴のパターンを作成する.本稿では,例として Jakarta Commons リポジトリに対して手法を適用し,その有用性について検討する.
It is effective to identify how and where to refactor programs because doing so improves the program design. We have so far proposed a technique to suggest refactoring opportunities by using a sequence of program modifications. However, the approach requires its users to describe the characteristics of modification histories in advance as modification patterns. In this paper, we propose a basic technique to discover characteristic modification histories from a software repository, which in turn enables finding refactoring opportunities. First, we extract modified snippets of code, or edit scripts, from the repository; the edit scripts are obtained by calculating the differences between the abstract syntax trees of the programs. We also extract past cases of refactorings from the same repository, and then create modification patterns by searching the modification histories of the syntactic elements related to those refactorings for frequent structures. We examine the effectiveness of our technique by applying it to the Jakarta Commons software repository.
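The frequent-structure step could be approximated by counting recurring subsequences of edit operations across the histories that precede known refactoring cases, as in the following sketch. The string-based operation labels and the fixed window length are simplifying assumptions.

```java
import java.util.*;

// Hypothetical mining step: count contiguous edit-script subsequences of
// length k across histories; frequent ones become modification patterns.
class PatternMiner {
    static Map<List<String>, Integer> mine(List<List<String>> editScripts, int k) {
        Map<List<String>, Integer> counts = new HashMap<>();
        for (List<String> script : editScripts) {
            for (int i = 0; i + k <= script.size(); i++) {
                // Copy the window so the key is immutable and hashable.
                counts.merge(List.copyOf(script.subList(i, i + k)), 1, Integer::sum);
            }
        }
        return counts;
    }
}
```

Here each inner list would hold edit operation labels (e.g., node insertions, deletions, and updates) produced by AST differencing for one refactoring case.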
@article{hayashi-sigss200604,
author = {林 晋平 and 佐伯 元司},
title = {リファクタリング支援に用いる知識抽出のためのソフトウェアリポジトリの解析},
journal = {電子情報通信学会技術研究報告},
volume = 106,
number = 16,
pages = {1--6},
year = 2006,
month = {apr},
}
[hayashi-sigss200604]: as a page
既存プログラムの理解を助けるために,使用されているデザインパターンを動的解析を用いて検出する研究が行われている.検出が有効に行われるためには,動的解析の際にパターンが使用されている部分を実行する必要があり,これを満たすテストケースを作成することが求められる.そこで,本稿ではデザインパターン検出の際に行う動的解析に用いるテストケースの作成を効果的に行う手法を提案する.提案手法では,まずプログラムを静的解析し,パターン候補を識別する.そして,各候補に対して,あらかじめ用意しておいた雛型からテストケースを作成する.このとき,人手で行うメソッドの入力引数の決定を容易にするために支援情報を提示する.提案手法を実装し,動的解析を行う既存のデザインパターン検出ツールを用いて適用実験を行い,本手法の有効性を示した.
To help with the understanding of existing programs, research on detecting occurrences of design patterns via dynamic analysis has been carried out. To detect patterns via dynamic analysis, we need test cases that execute the parts where the patterns are actually used. In this paper, we propose a technique for effectively generating the test cases used for the dynamic analysis of design pattern detection. First, we statically analyze the program to pick up pattern candidates. Then, for each candidate, we generate test cases from predefined templates. To ease the manual decision of method arguments, the technique presents supporting information. We show the effectiveness of our technique by implementing a supporting tool and conducting experiments with an existing design pattern detection tool that performs dynamic analysis.
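As an illustration of the template-instantiation step, the sketch below fills a per-candidate test skeleton; `Candidate`, its fields, and the generated JUnit-style shape are assumptions, not the tool's actual templates. The `TODO` argument slots are where the user decides method arguments with the presented supporting information.

```java
import java.util.List;

// Hypothetical result of the static analysis: one detected pattern candidate.
record Candidate(String pattern, String subjectClass, List<String> roleMethods) {}

class TestTemplateGenerator {
    /** Instantiates a test skeleton that exercises the candidate's roles. */
    static String generate(Candidate c) {
        StringBuilder sb = new StringBuilder();
        sb.append("public class ").append(c.subjectClass()).append("PatternTest {\n");
        sb.append("    @org.junit.Test public void exercises").append(c.pattern()).append("() {\n");
        sb.append("        ").append(c.subjectClass()).append(" subject = new ")
          .append(c.subjectClass()).append("(/* TODO: arguments */);\n");
        for (String m : c.roleMethods()) {
            sb.append("        subject.").append(m).append("(/* TODO: arguments */);\n");
        }
        sb.append("    }\n}\n");
        return sb.toString();
    }
}
```

Running the generated tests under the dynamic analyzer then exercises exactly the code where each candidate pattern is suspected to occur.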
@article{rsakamoto-sigss200604,
author = {坂本 良太 and 林 晋平 and 佐伯 元司},
title = {デザインパターン検出のためのテストケース作成支援},
journal = {電子情報通信学会技術研究報告},
volume = 106,
number = 16,
pages = {7--12},
year = 2006,
month = {apr},
}
[rsakamoto-sigss200604]: as a page
2006年9月に東京にて第21回ソフトウェア工学の自動化国際会議(ASE2006)を開催および参加したので,取り上げられた主な内容を紹介する.会議の傾向として,モデル検査/記号実行に基づくプログラム解析/検証の取り組み,および,プログラム変更履歴からの特定の情報(例えばアスペクト)発掘の取り組みが多く見られ,両分野への取り組みの活発化を伺えた.会議には約 220 名の参加があり,国内外の研究者がソフトウェア工学自動化の最先端の取り組みについて議論し交流する良い機会となった.
This paper reports the major topics of the 21st IEEE/ACM International Conference on Automated Software Engineering (ASE 2006), held in September 2006 in Tokyo. There were many presentations on program analysis/verification and pattern/aspect mining.
@article{washizaki-sigse200611,
author = {鷲崎 弘宜 and 久保 淳人 and 下滝 亜里 and 中川 博之 and 林 晋平 and 丸山 勝久 and 本位田 真一},
title = {第21回ソフトウェア工学の自動化国際会議({ASE2006})開催および参加報告},
journal = {情報処理学会研究報告},
volume = 2006,
number = 125,
pages = {81--88},
year = 2006,
month = {nov},
}
[washizaki-sigse200611]: as a page
リファクタリングはソフトウェアの外的振る舞いを保存したままその品質を改善するための有効な手法であるが,適切なリファクタリングの選択やその適用箇所の決定には高度な知識や経験を必要とし,簡単ではない.本稿では,プログラムの変更履歴を用いてそれらを示す手法を提案する.システムが変更履歴を参照することにより,開発者の意図を汲んだ提示を迅速かつ自動的に行うことが可能になる.本稿では,一連の変更と変更パターンとをパターンマッチさせることにより提示すべきリファクタリングを決定するシステムを作成し,提案手法の実現可能性を示した.
Refactoring is one of the promising techniques for improving software design by means of behavior-preserving program transformation, and it is widely taken into practice. However, it is not easy to identify where to apply which refactoring, because doing so requires proper knowledge and experience. In this paper, we propose a technique for suggesting refactorings by using a sequence of program modifications. By referring to the modification history, the technique can quickly and automatically suggest which refactoring is suitable, reflecting the developer's intentions. We illustrate the feasibility of our approach by developing a system that selects refactorings by matching a sequence of modifications against modification patterns.
@article{hayashi-sigss200408,
author = {林 晋平 and 栗原 正仁},
title = {プログラムの変更履歴に基づくリファクタリング支援},
journal = {電子情報通信学会技術研究報告},
volume = 104,
number = 242,
pages = {13--18},
year = 2004,
month = {aug},
}
[hayashi-sigss200408]: as a page