[WIP] KMeans clustering #29

Closed · wants to merge 1 commit into master

Conversation

novoselrok (Collaborator)

Pull request for KMeans clustering (#22).

  • Basic algorithm implementation
  • Tests
  • Documentation
  • KMeans++ (initialization sketched after this list)
  • Optimization using triangle inequality
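
For reference, k-means++ seeding picks the first centroid uniformly at random and each subsequent centroid with probability proportional to its squared distance to the nearest centroid chosen so far. A rough sketch of that step (illustrative only, not the PR's code; plain arrays stand in for the library's matrix types, and names like initialCentroids are made up):

import scala.util.Random

object KMeansPlusPlusSketch {

  private def sqDist(a: Array[Double], b: Array[Double]): Double = {
    var s = 0.0
    var i = 0
    while (i < a.length) { val d = a(i) - b(i); s += d * d; i += 1 }
    s
  }

  // Pick the first centroid uniformly at random, then pick each subsequent
  // centroid with probability proportional to the squared distance to the
  // nearest centroid chosen so far.
  def initialCentroids(x: Array[Array[Double]], k: Int, rand: Random): Array[Array[Double]] = {
    val centroids = Array.ofDim[Array[Double]](k)
    centroids(0) = x(rand.nextInt(x.length))
    for (c <- 1 until k) {
      // squared distance from each point to its nearest already-chosen centroid
      val d2 = x.map(p => (0 until c).map(i => sqDist(p, centroids(i))).min)
      // inverse-CDF sampling over the cumulative squared distances
      val r = rand.nextDouble() * d2.sum
      val cumulative = d2.scanLeft(0.0)(_ + _).drop(1)
      val idx = cumulative.indexWhere(_ >= r)
      centroids(c) = x(if (idx >= 0) idx else x.length - 1)
    }
    centroids
  }
}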

codecov bot commented Jun 19, 2018

Codecov Report

Merging #29 into master will decrease coverage by 9.36%.
The diff coverage is 0%.

@@            Coverage Diff             @@
##           master      #29      +/-   ##
==========================================
- Coverage   85.18%   75.82%   -9.37%     
==========================================
  Files          26       30       +4     
  Lines         243      273      +30     
  Branches        3       10       +7     
==========================================
  Hits          207      207              
- Misses         36       66      +30

@inejc self-requested a review June 20, 2018 12:50
@inejc added the enhancement and work in progress labels Jun 20, 2018
inejc (Member) commented Jun 20, 2018

Thanks, @novoselrok! Will take a look at it as soon as I find the time.

@inejc (Member) left a comment

Looking at scikit-learn implementations, only a subset of clustering algorithms are able to predict clusters on new data. Our base class should take that into account; we could have something like Clusterer extends Estimator and then a subclass PredictiveClusterer extends Clusterer. What are your thoughts on this? Also, if KMeans is the only implementation that requires a seed value to be passed to fit, should we find another way to get it there?
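
A minimal sketch of that hierarchy (illustrative only; Estimator stands in for the library's base trait, and the isFitted/fit signatures are assumptions rather than the actual doddle-model API):

import com.picnicml.doddlemodel.data.{Features, Target}

trait Estimator extends Serializable

// Common to every clustering estimator: all of them can be fitted.
abstract class Clusterer[A <: Clusterer[A]] extends Estimator {
  def isFitted: Boolean
  def fit(x: Features): A
}

// Only clusterers that can assign previously unseen examples to clusters
// (e.g. KMeans) would extend this; DBSCAN would stop at Clusterer.
abstract class PredictiveClusterer[A <: PredictiveClusterer[A]] extends Clusterer[A] {
  def predict(x: Features): Target
}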


import com.picnicml.doddlemodel.data.{Features, Target}

abstract class Clusterer[A <: Clusterer[A]] extends Estimator {
inejc (Member) commented on the diff:

Clusterer currently only has a single publicly exposed function: def predict(x: Features): Target. Additionally, I don't think all clustering algorithms will expose predict; DBSCAN, for example, doesn't. On the other hand, all of them probably need fit exposed?
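
For example, continuing the sketch above (purely illustrative; the DBSCANSketch name and body are made up), a fit-only clusterer would expose the labels assigned during fitting but no predict:

// A density-based clusterer exposes fit and the labels assigned to the
// training data, but no predict for new points.
class DBSCANSketch(val labels: Option[Target] = None) extends Clusterer[DBSCANSketch] {
  override def isFitted: Boolean = labels.isDefined
  override def fit(x: Features): DBSCANSketch =
    new DBSCANSketch(Some(runDensityClustering(x)))
  // placeholder for the actual density-based clustering step
  private def runDensityClustering(x: Features): Target = ???
}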


import scala.util.Random

trait RandomizableClusterer[A <: RandomizableClusterer[A]] extends Clusterer[A] {
inejc (Member) commented on the diff:

As already mentioned, fit here shouldn't be protected. On the other hand, that introduces another problem: the API (public functions defined in abstract classes and traits) should live in the base package. If you take a look at linear.LinearModel and the other traits there, none of them define any public interface; they only encapsulate functionality that is common to all linear models. We should identify what is common to all clustering estimators and expose that in the form of base class(es).
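
A sketch of that split, revising the earlier one (names like fitSafe and sampleRowIndices are assumptions; compare linear.LinearModel in the repo): the public fit lives once in the base class, and the trait only shares implementation details without adding public functions:

import com.picnicml.doddlemodel.data.Features
import scala.util.Random

abstract class Clusterer[A <: Clusterer[A]] extends Estimator {
  // Public API, defined once in the base package.
  def fit(x: Features): A = {
    require(!isFitted, "Estimator is already fitted")
    fitSafe(x)
  }
  def isFitted: Boolean
  // Implemented by concrete clusterers; not publicly exposed.
  protected def fitSafe(x: Features): A
}

trait RandomizableClusterer[A <: RandomizableClusterer[A]] extends Clusterer[A] {
  // Shared functionality only, no new public interface: e.g. sampling k
  // distinct row indices for centroid initialization.
  protected def sampleRowIndices(numRows: Int, k: Int)(implicit rand: Random): Seq[Int] =
    rand.shuffle((0 until numRows).toList).take(k)
}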

@inejc added the help wanted label Jan 24, 2019
@inejc closed this Sep 26, 2019
Kanban Board for doddle-model automation moved this from To do to Done Sep 26, 2019