Find the identical rows in a matrix [duplicate]


This question already has an answer here:



  • How to efficiently find positions of duplicates?

    8 answers



Suppose I have the following matrix:



M = {{0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
     {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
     {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
     {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
     {0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0},
     {0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0},
     {0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0},
     {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}};

TableForm[M, TableHeadings -> {{S1, S2, S3, S4, S5, S6, S7, S8}}]


[image: the matrix displayed with TableForm, rows labeled S1 through S8]



In this case, rows (S1, S8), (S2, S3, S4) and (S5, S6, S7) turn out to be identical, i.e. they have equal values in the same column positions. I actually have a 1000 x 1000 matrix to examine, so I would appreciate help with coding this efficiently.










Tags: list-manipulation, matrix






asked Apr 25 at 19:15 by PRG; edited Apr 25 at 20:03 by m_goldberg

marked as duplicate by Michael E2, m_goldberg, happy fish, bbgodfrey, C. E. May 1 at 13:07


This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.











  • Try Values[PositionIndex[M]] – Coolwater, Apr 25 at 21:39

  • @Coolwater If there is a unique row, your method will fail; at the least, one needs to delete groups of length 1. – OkkesDulgerci, Apr 25 at 22:58

  • @Coolwater IMHO, the best answer is still missing. Please consider posting PositionIndex as a possible solution. – Henrik Schumacher, Apr 26 at 12:08
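
Putting the last two comments together, a minimal sketch of the PositionIndex approach (assuming you only want groups of genuinely duplicated rows, so singleton groups are dropped):

(* group row indices by row value, then keep only groups with more than one member *)
Select[Values[PositionIndex[M]], Length[#] > 1 &]
(* {{1, 8}, {2, 3, 4}, {5, 6, 7}} *)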












4 Answers

I'd use GroupBy.



First, the names of the rows; these can be anything you like, for example



rownames = Array[ToExpression["S" <> ToString[#]] &, Length[M]]



{S1, S2, S3, S4, S5, S6, S7, S8}




Next the grouping:



groups = GroupBy[Thread[rownames -> M], Last -> First]



<|{0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} -> {S1, S8},
  {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0} -> {S2, S3, S4},
  {0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0} -> {S5, S6, S7}|>




If all you need are the names:



Values[groups]



{{S1, S8}, {S2, S3, S4}, {S5, S6, S7}}
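
If the matrix can also contain rows that occur only once, groups will include singleton entries as well; a small sketch of filtering them out (based on the comment discussion under the question, not part of the original answer):

(* keep only the groups that actually contain duplicated rows *)
Values[Select[groups, Length[#] > 1 &]]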







answered Apr 26 at 11:45 by Roman



    idx = DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]



    {{1, 8}, {2, 3, 4}, {5, 6, 7}}




    In order to obtain the labels of the rows, you may use the following:



    labels = {S1, S2, S3, S4, S5, S6, S7, S8};
    Map[labels[[#]] &, idx, {2}]



    {{S1, S8}, {S2, S3, S4}, {S5, S6, S7}}
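
    As an aside (a sketch borrowing the rownames idea from the GroupBy answer above, not part of the original answer): if the labels really are just "S" followed by the row index, they can be generated rather than typed out,

    labels = Array[ToExpression["S" <> ToString[#]] &, Length[M]];

    after which Map[labels[[#]] &, idx, {2}] works unchanged.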







    edited Apr 25 at 20:04; answered Apr 25 at 19:22 by Henrik Schumacher














    • Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)? – PRG, Apr 25 at 19:26

    • MANY THANKS, HENRIK! – PRG, Apr 25 at 20:33

    • YOU'RE WELCOME, PRG! =D – Henrik Schumacher, Apr 25 at 20:34



















    The function positionDuplicates[] from How to efficiently find positions of duplicates? does the job, faster than Nearest.



    (* Henrik's method *)
    posDupes[M_] := DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]

    SeedRandom[0]; (* to make a reproducible 1000 x 1000 matrix *)
    foo = Nest[RandomInteger[1, {1000, 1000}] # &, 1, 9];

    d1 = Cases[positionDuplicates[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
    (* {0.017, Null} *)

    d2 = Cases[posDupes[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
    (* {0.060, Null} *)

    d1 == d2
    (* True *)

    d1
    (*
    25, 75, 291, 355, 356, 425, 475, 518, 547, 668, 670, 750, 777,
    173, 516, 544, 816, 610, 720
    *)
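
    For reference, positionDuplicates from the linked question (its definition is also quoted in the benchmark answer below) boils down to a GatherBy over the row indices, so that definition needs to be evaluated before the timings above:

    positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &];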





    answered Apr 26 at 0:28 by Michael E2










    • Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates[] – OkkesDulgerci, Apr 26 at 1:41

    • @OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex[] solution. – Michael E2, Apr 26 at 1:44

    • @OkkesDulgerci It's interesting that PositionIndex[] outperforms positionDuplicates[] on a list of lists, because it is still much slower on a list of integers. – Michael E2, Apr 26 at 1:53



















    While this question repeats a previous query about finding positions of duplicates, the duplicates here occur amongst a list of binary vectors, in contrast to the original question, where they occur amongst a list of numbers. As illustrated in an answer to the original query, however, the type, depth and distribution of the input can significantly affect efficiency, so there may well be further optimizations specific to this case of finding duplicates amongst binary vectors. The following summarises timings, for various types of binary vectors, of the "superfunction" DuplicatePositions (collected and defined from answers to the original question, in particular those of Szabolcs, Carl Woll and Mr.Wizard), positionDuplicates (the fastest solution for numbers, from Szabolcs) together with a tweak via the "UseGatherByLocalMap" Method option (from Carl Woll), the accepted groupBy answer (by Roman) and the nearest answer (by Henrik Schumacher). I've contributed the "UseOrdering" Method in DuplicatePositions:



    duplicatePositionsByOrdering[ls_]:= SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First]


    which seems to do well for sparse vectors (a more succinct version of similar ideas used by Mr.Wizard and Leonid Shifrin in their answers). Note that a random 1000x1000 binary matrix is very likely to contain no duplicate rows at all, so presumably in the OP's situation the actual data is not randomly generated and instead includes manufactured repeats. On to the timings (the tag function just puts in the S1, S2, ... tags as originally requested, and the tick indicates identical output):



    [image: benchmark timing results]



    Obviously timings aren't everything, as short, clear functions can often be preferable (as well as potentially being more efficient for different inputs), but benchmarking can sometimes be illuminating; here, for example, it indicates that GroupBy seems to recognize order for ragged vectors, unlike GatherBy.



    The code for the above output is below



    SetAttributes[benchmark, HoldAll];

    benchmark[functions_, opts : OptionsPattern[]] :=
    Function[input, benchmark[functions, input, opts], HoldAll];

    benchmark[functions_, input_, OptionsPattern[]] := Module[
       {tm = Function[fn, Function[x, ToString[fn] -> RepeatedTiming@fn@x]], timesOutputs, times},
       SeedRandom@0;
       timesOutputs = Through[(tm /@ functions)@input];
       times = SortBy[Query[All, All, First]@timesOutputs, Last] // Dataset;
       If[OptionValue@"CheckOutputs",
        Labeled[times,
         Row[{ToString@Unevaluated@input, Spacer@80,
           If[SameQ @@ (Query[All, Last, 2]@timesOutputs),
            Style["\[Checkmark]", Green, 20], Style["x", Red, 20]]}],
         Top], times]
       ];

    Options[benchmark] = {"CheckOutputs" -> True};

    Options[DuplicatePositions] = {Method -> Automatic};

    DuplicatePositions[ls_, OptionsPattern[]] :=
      With[{method = OptionValue[Method]},
       Switch[method,
        "UseGatherBy", GatherBy[Range@Length@ls, ls[[#]] &],
        "UsePositionIndex", Values@PositionIndex@ls,
        "UseOrdering", SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First],
        "UseGatherByLocalMap", Module[{func}, func /: Map[func, _] := ls;
         GatherBy[Range@Length@ls, func]],
        Automatic, Which[
          ArrayQ[ls, 1, NumericQ],
           DuplicatePositions[ls, "Method" -> "UseGatherBy"],
          ArrayQ[ls, 2, NumericQ], DuplicatePositionsBy[ls, FromDigits],
          MatchQ[{{_?IntegerQ ..} ..}]@ls,
           DuplicatePositionsBy[ls, FromDigits],
          True, DuplicatePositions[ls, Method -> "UsePositionIndex"]
          ]]];

    DuplicatePositionsBy[ls_, fn_, opts : OptionsPattern[]] :=
      DuplicatePositions[fn /@ ls, opts];

    tag = Map["S" <> ToString@# &, #, {-1}] &;
    positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &];
    groupBy[M_] := With[
       {rownames = Array["S" <> ToString[#] &, Length[M]]},
       Values@GroupBy[Thread[rownames -> M], Last -> First]];
    nearest[M_] :=
      DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]];

    n = 10^4;
    binaryVectors50k = IntegerDigits[#, 2, 13] & /@ RandomInteger[n, 5*n];
    fns = {
       groupBy,
       (nearest@# // tag) &,
       (DuplicatePositions@# // tag) &,
       (DuplicatePositionsBy[#, FromDigits[#, 2] &,
          Method -> "UseGatherByLocalMap"] // tag) &,
       (positionDuplicates@# // tag) &};
    benchmark[fns]@binaryVectors50k

    n = 10^3;
    binaryVectorsRagged5k = IntegerDigits[#, 2] & /@ RandomInteger[n, 5*n];
    fns = {
       groupBy,
       (DuplicatePositions@# // tag) &,
       (DuplicatePositionsBy[#, FromDigits[#, 2] &,
          Method -> "UseGatherByLocalMap"] // tag) &,
       (positionDuplicates@# // tag) &};
    benchmark[fns]@binaryVectorsRagged5k

    n = 10^4;
    binaryVectorsSparse10k := RandomInteger[1, {n, n}];
    fns = {
       (DuplicatePositions@# // tag) &,
       (positionDuplicates@# // tag) &,
       (DuplicatePositions[#, Method -> "UseOrdering"] // tag) &,
       groupBy};
    benchmark[fns]@binaryVectorsSparse10k
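
    A quick sanity check of the ordering-based helper on the question's matrix M (not part of the benchmark above): it should reproduce the groups found by the other answers.

    duplicatePositionsByOrdering[M]
    (* expected: {{1, 8}, {2, 3, 4}, {5, 6, 7}} *)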





    share|improve this answer












    $endgroup$






















      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      2
















      $begingroup$

      I'd use GroupBy.



      First the names of the rows: can be anything you like, for example



      rownames = Array[ToExpression["S" <> ToString[#]] &, Length[M]]



      S1, S2, S3, S4, S5, S6, S7, S8




      Next the grouping:



      groups = GroupBy[Thread[rownames -> M], Last -> First]



      <|0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 -> S1, S8,
      0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 -> S2, S3, S4,
      0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0 -> S5, S6, S7|>




      If all you need are the names:



      Values[groups]



      S1, S8, S2, S3, S4, S5, S6, S7







      share|improve this answer










      $endgroup$



















        2
















        $begingroup$

        I'd use GroupBy.



        First the names of the rows: can be anything you like, for example



        rownames = Array[ToExpression["S" <> ToString[#]] &, Length[M]]



        S1, S2, S3, S4, S5, S6, S7, S8




        Next the grouping:



        groups = GroupBy[Thread[rownames -> M], Last -> First]



        <|0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 -> S1, S8,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 -> S2, S3, S4,
        0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0 -> S5, S6, S7|>




        If all you need are the names:



        Values[groups]



        S1, S8, S2, S3, S4, S5, S6, S7







        share|improve this answer










        $endgroup$

















          2














          2










          2







          $begingroup$

          I'd use GroupBy.



          First the names of the rows: can be anything you like, for example



          rownames = Array[ToExpression["S" <> ToString[#]] &, Length[M]]



          S1, S2, S3, S4, S5, S6, S7, S8




          Next the grouping:



          groups = GroupBy[Thread[rownames -> M], Last -> First]



          <|0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 -> S1, S8,
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 -> S2, S3, S4,
          0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0 -> S5, S6, S7|>




          If all you need are the names:



          Values[groups]



          S1, S8, S2, S3, S4, S5, S6, S7







          share|improve this answer










          $endgroup$



          I'd use GroupBy.



          First the names of the rows: can be anything you like, for example



          rownames = Array[ToExpression["S" <> ToString[#]] &, Length[M]]



          S1, S2, S3, S4, S5, S6, S7, S8




          Next the grouping:



          groups = GroupBy[Thread[rownames -> M], Last -> First]



          <|0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 -> S1, S8,
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 -> S2, S3, S4,
          0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0 -> S5, S6, S7|>




          If all you need are the names:



          Values[groups]



          S1, S8, S2, S3, S4, S5, S6, S7








          share|improve this answer













          share|improve this answer




          share|improve this answer



          share|improve this answer










          answered Apr 26 at 11:45









          RomanRoman

          15.9k1 gold badge22 silver badges55 bronze badges




          15.9k1 gold badge22 silver badges55 bronze badges


























              8
















              $begingroup$

              idx = DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]



              1, 8, 2, 3, 4, 5, 6, 7




              In order to obtain the labels of the rows, you may use the following:



              labels = S1, S2, S3, S4, S5, S6, S7, S8;
              Map[labels[[#]] &, idx, 2]



              S1, S8, S2, S3, S4, S5, S6, S7







              share|improve this answer












              $endgroup$














              • $begingroup$
                Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)?
                $endgroup$
                – PRG
                Apr 25 at 19:26










              • $begingroup$
                MANY THANKS, HENRIK!
                $endgroup$
                – PRG
                Apr 25 at 20:33










              • $begingroup$
                YOU'RE WELCOME, PRG! =D
                $endgroup$
                – Henrik Schumacher
                Apr 25 at 20:34















              8
















              $begingroup$

              idx = DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]



              1, 8, 2, 3, 4, 5, 6, 7




              In order to obtain the labels of the rows, you may use the following:



              labels = S1, S2, S3, S4, S5, S6, S7, S8;
              Map[labels[[#]] &, idx, 2]



              S1, S8, S2, S3, S4, S5, S6, S7







              share|improve this answer












              $endgroup$














              • $begingroup$
                Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)?
                $endgroup$
                – PRG
                Apr 25 at 19:26










              • $begingroup$
                MANY THANKS, HENRIK!
                $endgroup$
                – PRG
                Apr 25 at 20:33










              • $begingroup$
                YOU'RE WELCOME, PRG! =D
                $endgroup$
                – Henrik Schumacher
                Apr 25 at 20:34













              8














              8










              8







              $begingroup$

              idx = DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]



              1, 8, 2, 3, 4, 5, 6, 7




              In order to obtain the labels of the rows, you may use the following:



              labels = S1, S2, S3, S4, S5, S6, S7, S8;
              Map[labels[[#]] &, idx, 2]



              S1, S8, S2, S3, S4, S5, S6, S7







              share|improve this answer












              $endgroup$



              idx = DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]



              1, 8, 2, 3, 4, 5, 6, 7




              In order to obtain the labels of the rows, you may use the following:



              labels = S1, S2, S3, S4, S5, S6, S7, S8;
              Map[labels[[#]] &, idx, 2]



              S1, S8, S2, S3, S4, S5, S6, S7








              share|improve this answer















              share|improve this answer




              share|improve this answer



              share|improve this answer








              edited Apr 25 at 20:04

























              answered Apr 25 at 19:22









              Henrik SchumacherHenrik Schumacher

              68.8k5 gold badges99 silver badges192 bronze badges




              68.8k5 gold badges99 silver badges192 bronze badges














              • $begingroup$
                Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)?
                $endgroup$
                – PRG
                Apr 25 at 19:26










              • $begingroup$
                MANY THANKS, HENRIK!
                $endgroup$
                – PRG
                Apr 25 at 20:33










              • $begingroup$
                YOU'RE WELCOME, PRG! =D
                $endgroup$
                – Henrik Schumacher
                Apr 25 at 20:34
















              • $begingroup$
                Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)?
                $endgroup$
                – PRG
                Apr 25 at 19:26










              • $begingroup$
                MANY THANKS, HENRIK!
                $endgroup$
                – PRG
                Apr 25 at 20:33










              • $begingroup$
                YOU'RE WELCOME, PRG! =D
                $endgroup$
                – Henrik Schumacher
                Apr 25 at 20:34















              $begingroup$
              Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)?
              $endgroup$
              – PRG
              Apr 25 at 19:26




              $begingroup$
              Henrik: Can I add the S in front of the result; e.g., (S1,S8),(S3,S4),(S5,S6,S7)?
              $endgroup$
              – PRG
              Apr 25 at 19:26












              $begingroup$
              MANY THANKS, HENRIK!
              $endgroup$
              – PRG
              Apr 25 at 20:33




              $begingroup$
              MANY THANKS, HENRIK!
              $endgroup$
              – PRG
              Apr 25 at 20:33












              $begingroup$
              YOU'RE WELCOME, PRG! =D
              $endgroup$
              – Henrik Schumacher
              Apr 25 at 20:34




              $begingroup$
              YOU'RE WELCOME, PRG! =D
              $endgroup$
              – Henrik Schumacher
              Apr 25 at 20:34











              6
















              $begingroup$

              The function positionDuplicates [] from How to efficiently find positions of duplicates? does the job, faster than Nearest.



              (* Henrik's method *)
              posDupes[M_] := DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]

              SeedRandom[0]; (* to make a reproducible 1000 x 1000 matrix *)
              foo = Nest[RandomInteger[1, 1000, 1000] # &, 1, 9];

              d1 = Cases[positionDuplicates[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.017, Null *)

              d2 = Cases[posDupes[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.060, Null *)

              d1 == d2
              (* True *)

              d1
              (*
              25, 75, 291, 355, 356, 425, 475, 518, 547, 668, 670, 750, 777,
              173, 516, 544, 816, 610, 720
              *)





              share|improve this answer










              $endgroup$










              • 1




                $begingroup$
                Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates []
                $endgroup$
                – OkkesDulgerci
                Apr 26 at 1:41










              • $begingroup$
                @OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex[] solutoin.
                $endgroup$
                – Michael E2
                Apr 26 at 1:44







              • 3




                $begingroup$
                @OkkesDulgerci It's interesting that PositionIndex[] outperforms positionDuplicates[] on a list of lists, because it is still much slower on a list of integers.
                $endgroup$
                – Michael E2
                Apr 26 at 1:53















              6
















              $begingroup$

              The function positionDuplicates [] from How to efficiently find positions of duplicates? does the job, faster than Nearest.



              (* Henrik's method *)
              posDupes[M_] := DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]

              SeedRandom[0]; (* to make a reproducible 1000 x 1000 matrix *)
              foo = Nest[RandomInteger[1, 1000, 1000] # &, 1, 9];

              d1 = Cases[positionDuplicates[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.017, Null *)

              d2 = Cases[posDupes[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.060, Null *)

              d1 == d2
              (* True *)

              d1
              (*
              25, 75, 291, 355, 356, 425, 475, 518, 547, 668, 670, 750, 777,
              173, 516, 544, 816, 610, 720
              *)





              share|improve this answer










              $endgroup$










              • 1




                $begingroup$
                Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates []
                $endgroup$
                – OkkesDulgerci
                Apr 26 at 1:41










              • $begingroup$
                @OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex[] solutoin.
                $endgroup$
                – Michael E2
                Apr 26 at 1:44







              • 3




                $begingroup$
                @OkkesDulgerci It's interesting that PositionIndex[] outperforms positionDuplicates[] on a list of lists, because it is still much slower on a list of integers.
                $endgroup$
                – Michael E2
                Apr 26 at 1:53













              6














              6










              6







              $begingroup$

              The function positionDuplicates [] from How to efficiently find positions of duplicates? does the job, faster than Nearest.



              (* Henrik's method *)
              posDupes[M_] := DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]

              SeedRandom[0]; (* to make a reproducible 1000 x 1000 matrix *)
              foo = Nest[RandomInteger[1, 1000, 1000] # &, 1, 9];

              d1 = Cases[positionDuplicates[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.017, Null *)

              d2 = Cases[posDupes[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.060, Null *)

              d1 == d2
              (* True *)

              d1
              (*
              25, 75, 291, 355, 356, 425, 475, 518, 547, 668, 670, 750, 777,
              173, 516, 544, 816, 610, 720
              *)





              share|improve this answer










              $endgroup$



              The function positionDuplicates [] from How to efficiently find positions of duplicates? does the job, faster than Nearest.



              (* Henrik's method *)
              posDupes[M_] := DeleteDuplicates[Sort /@ Nearest[M -> Automatic, M, ∞, 0]]

              SeedRandom[0]; (* to make a reproducible 1000 x 1000 matrix *)
              foo = Nest[RandomInteger[1, 1000, 1000] # &, 1, 9];

              d1 = Cases[positionDuplicates[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.017, Null *)

              d2 = Cases[posDupes[foo], dupe_ /; Length[dupe] > 1]; // RepeatedTiming
              (* 0.060, Null *)

              d1 == d2
              (* True *)

              d1
              (*
              25, 75, 291, 355, 356, 425, 475, 518, 547, 668, 670, 750, 777,
              173, 516, 544, 816, 610, 720
              *)






              share|improve this answer













              share|improve this answer




              share|improve this answer



              share|improve this answer










              answered Apr 26 at 0:28









              Michael E2Michael E2

              159k13 gold badges219 silver badges518 bronze badges




              159k13 gold badges219 silver badges518 bronze badges










              • 1




                $begingroup$
                Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates []
                $endgroup$
                – OkkesDulgerci
                Apr 26 at 1:41










              • $begingroup$
                @OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex[] solutoin.
                $endgroup$
                – Michael E2
                Apr 26 at 1:44







              • 3




                $begingroup$
                @OkkesDulgerci It's interesting that PositionIndex[] outperforms positionDuplicates[] on a list of lists, because it is still much slower on a list of integers.
                $endgroup$
                – Michael E2
                Apr 26 at 1:53












              • 1




                $begingroup$
                Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates []
                $endgroup$
                – OkkesDulgerci
                Apr 26 at 1:41










              • $begingroup$
                @OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex[] solutoin.
                $endgroup$
                – Michael E2
                Apr 26 at 1:44







              • 3




                $begingroup$
                @OkkesDulgerci It's interesting that PositionIndex[] outperforms positionDuplicates[] on a list of lists, because it is still much slower on a list of integers.
                $endgroup$
                – Michael E2
                Apr 26 at 1:53







              1




              1




              $begingroup$
              Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates []
              $endgroup$
              – OkkesDulgerci
              Apr 26 at 1:41




              $begingroup$
              Cases[Values[PositionIndex[M]], dupe_ /; Length[dupe] > 1] is faster than positionDuplicates []
              $endgroup$
              – OkkesDulgerci
              Apr 26 at 1:41












              $begingroup$
              @OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex[] solutoin.
              $endgroup$
              – Michael E2
              Apr 26 at 1:44





              $begingroup$
              @OkkesDulgerci Yes, it is for me, too, in V12. My main point is that the solution to this question has been given in another Q&A. See this answer for the PositionIndex[] solutoin.
              $endgroup$
              – Michael E2
              Apr 26 at 1:44





              3




              3




              $begingroup$
              @OkkesDulgerci It's interesting that PositionIndex[] outperforms positionDuplicates[] on a list of lists, because it is still much slower on a list of integers.
              $endgroup$
              – Michael E2
              Apr 26 at 1:53




              $begingroup$
              @OkkesDulgerci It's interesting that PositionIndex[] outperforms positionDuplicates[] on a list of lists, because it is still much slower on a list of integers.
              $endgroup$
              – Michael E2
              Apr 26 at 1:53











              3
















              $begingroup$

              While this question repeats a previous query about finding DuplicatePositions, the duplicates here are amongst a list of binary vectors in contrast to the original duplicates occurring amongst a list of numbers. As illustrated in an answer to the original query however, the type, depth and distribution of inputs can significantly impact efficiency so there may well be further optimizations specific to this case of finding duplicates amongst binary vectors. The following summarises timings of the "superfunction" DuplicatePositions (collected and defined from answers to the original question - in particular Szabolcs, Carl Woll and Mr.Wizard), postionDuplicates (the fastest solutions for numbers from Szabolcs) and a tweeking in the "UseGatherByLocalMap" Method option (from Carl Woll), the accepted groupBy answer (by Roman) and the nearest answer (by Henrik Schumacher) for various types of binary vectors. I've contributed the "UseOrdering" Method in DuplicatePositions.



              duplicatePositionsByOrdering[ls_]:= SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First]


              which seems to do well for sparse vectors (a more succinct version of similar ideas used by Mr.Wizard and Leonid Shifrin in their anwers). Note that a random 1000x1000 binary matrix is very likely to be sparse to the point of no (row) duplicates occurring so presumably in the OP's situation the authentic data is not randomly generated and instead includes manufactured repeats. To the timings (the tag function just puts in the S1, S2 ... tags as originally requested and the tick indicates identical output):



              enter image description here



              Obviously timings aren't everything as short-clear functions can often be preferable (as well as potentially being more efficient for different inputs) but it can also sometimes be illuminating--here for example, indicating that GroupBy seems to recognize order for ragged vectors unlike GatherBy.



              The code for the above output is below



              SetAttributes[benchmark, HoldAll];

              benchmark[functions_, opts : OptionsPattern[]] :=
              Function[input, benchmark[functions, input, opts], HoldAll];

              benchmark[functions_, input_, OptionsPattern[]] := Module[ToString[fn] -> RepeatedTiming@fn@x,
              SeedRandom@0;
              timesOutputs = Through[(tm /@ functions)@input];
              times =
              SortBy[Query[All, All, First]@timesOutputs, Last] // Dataset;
              If[OptionValue@"CheckOutputs",
              Labeled[times,
              Row[ToString@Unevaluated@input, Spacer@80,
              If[SameQ @@ (Query[All, Last, 2]@timesOutputs),
              Style["[Checkmark]", Green, 20], Style["x", Red, 20]]],
              Top], times]
              ];

              Options[benchmark] = "CheckOutputs" -> True;

              Options[DuplicatePositions] = Method -> Automatic;

              DuplicatePositions[ls_, OptionsPattern[]] :=
              With[method = OptionValue[Method],
              Switch[method,
              "UseGatherBy", GatherBy[Range@Length@ls, ls[[#]] &],
              "UsePositionIndex", Values@PositionIndex@ls,
              "UseOrdering", SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First],
              "UseGatherByLocalMap", Module[func, func /: Map[func, _] := ls;
              GatherBy[Range@Length@ls, func]],
              Automatic, Which[
              ArrayQ[ls, 1, NumericQ],
              DuplicatePositions[ls, "Method" -> "UseGatherBy" ],
              ArrayQ[ls, 2, NumericQ], DuplicatePositionsBy[ls, FromDigits],
              MatchQ[_?IntegerQ .. ..]@ls,
              DuplicatePositionsBy[ls, FromDigits],
              True, DuplicatePositions[ls, Method -> "UsePositionIndex" ]
              ]]];

              DuplicatePositionsBy[ls_, fn_, opts : OptionsPattern[]] :=
              DuplicatePositions[fn /@ ls, opts];

              tag = Map["S" <> ToString@# &, #, -1] &;
              positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &];
              groupBy[M_] := With[
              rownames = Array["S" <> ToString[#] &, Length[M]],
              Values@GroupBy[Thread[rownames -> M], Last -> First]];
              nearest[M_] :=
              DeleteDuplicates[
              Sort /@ Nearest[M -> Automatic, M, [Infinity], 0]];
              n = 10^4;
              binaryVectors50k =
              IntegerDigits[#, 2, 13] & /@ RandomInteger[n, 5*n];
              fns =
              groupBy,
              (nearest@# // tag) &,
              (DuplicatePositions@# // tag) &,
              (DuplicatePositionsBy[#, FromDigits[#, 2] &,
              Method -> "UseGatherByLocalMap"] // tag) &,
              (positionDuplicates@# // tag) &
              ;
              benchmark[fns]@binaryVectors50k
              n = 10^3;
              binaryVectorsRagged5k = IntegerDigits[#, 2] & /@ RandomInteger[n, 5*n];
              fns =
              groupBy,
              (DuplicatePositions@# // tag) &,
              (DuplicatePositionsBy[#, FromDigits[#, 2] &,
              Method -> "UseGatherByLocalMap"] // tag) &,
              (positionDuplicates@# // tag) &
              ;
              benchmark[fns]@binaryVectorsRagged5k

              n = 10^4;
              binaryVectorsSparse10k := RandomInteger[1, n, n];
              fns =
              (DuplicatePositions@# // tag) &,
              (positionDuplicates@# // tag) &,
              (DuplicatePositions[#, Method -> "UseOrdering"] // tag) &,
              groupBy;
              benchmark[fns]@binaryVectorsSparse10k





              share|improve this answer












              $endgroup$



















                3
















                $begingroup$

                While this question repeats a previous query about finding DuplicatePositions, the duplicates here are amongst a list of binary vectors in contrast to the original duplicates occurring amongst a list of numbers. As illustrated in an answer to the original query however, the type, depth and distribution of inputs can significantly impact efficiency so there may well be further optimizations specific to this case of finding duplicates amongst binary vectors. The following summarises timings of the "superfunction" DuplicatePositions (collected and defined from answers to the original question - in particular Szabolcs, Carl Woll and Mr.Wizard), postionDuplicates (the fastest solutions for numbers from Szabolcs) and a tweeking in the "UseGatherByLocalMap" Method option (from Carl Woll), the accepted groupBy answer (by Roman) and the nearest answer (by Henrik Schumacher) for various types of binary vectors. I've contributed the "UseOrdering" Method in DuplicatePositions.



                duplicatePositionsByOrdering[ls_]:= SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First]


                which seems to do well for sparse vectors (a more succinct version of similar ideas used by Mr.Wizard and Leonid Shifrin in their anwers). Note that a random 1000x1000 binary matrix is very likely to be sparse to the point of no (row) duplicates occurring so presumably in the OP's situation the authentic data is not randomly generated and instead includes manufactured repeats. To the timings (the tag function just puts in the S1, S2 ... tags as originally requested and the tick indicates identical output):



                enter image description here



                Obviously timings aren't everything as short-clear functions can often be preferable (as well as potentially being more efficient for different inputs) but it can also sometimes be illuminating--here for example, indicating that GroupBy seems to recognize order for ragged vectors unlike GatherBy.



                The code for the above output is below



                SetAttributes[benchmark, HoldAll];

                benchmark[functions_, opts : OptionsPattern[]] :=
                Function[input, benchmark[functions, input, opts], HoldAll];

                benchmark[functions_, input_, OptionsPattern[]] := Module[ToString[fn] -> RepeatedTiming@fn@x,
                SeedRandom@0;
                timesOutputs = Through[(tm /@ functions)@input];
                times =
                SortBy[Query[All, All, First]@timesOutputs, Last] // Dataset;
                If[OptionValue@"CheckOutputs",
                Labeled[times,
                Row[ToString@Unevaluated@input, Spacer@80,
                If[SameQ @@ (Query[All, Last, 2]@timesOutputs),
                Style["[Checkmark]", Green, 20], Style["x", Red, 20]]],
                Top], times]
                ];

                Options[benchmark] = "CheckOutputs" -> True;

                Options[DuplicatePositions] = Method -> Automatic;

                DuplicatePositions[ls_, OptionsPattern[]] :=
                With[method = OptionValue[Method],
                Switch[method,
                "UseGatherBy", GatherBy[Range@Length@ls, ls[[#]] &],
                "UsePositionIndex", Values@PositionIndex@ls,
                "UseOrdering", SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First],
                "UseGatherByLocalMap", Module[func, func /: Map[func, _] := ls;
                GatherBy[Range@Length@ls, func]],
                Automatic, Which[
                ArrayQ[ls, 1, NumericQ],
                DuplicatePositions[ls, "Method" -> "UseGatherBy" ],
                ArrayQ[ls, 2, NumericQ], DuplicatePositionsBy[ls, FromDigits],
                MatchQ[_?IntegerQ .. ..]@ls,
                DuplicatePositionsBy[ls, FromDigits],
                True, DuplicatePositions[ls, Method -> "UsePositionIndex" ]
                ]]];

                DuplicatePositionsBy[ls_, fn_, opts : OptionsPattern[]] :=
                DuplicatePositions[fn /@ ls, opts];

                tag = Map["S" <> ToString@# &, #, -1] &;
                positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &];
                groupBy[M_] := With[
                rownames = Array["S" <> ToString[#] &, Length[M]],
                Values@GroupBy[Thread[rownames -> M], Last -> First]];
                nearest[M_] :=
                DeleteDuplicates[
                Sort /@ Nearest[M -> Automatic, M, [Infinity], 0]];
                n = 10^4;
                binaryVectors50k =
                IntegerDigits[#, 2, 13] & /@ RandomInteger[n, 5*n];
                fns =
                groupBy,
                (nearest@# // tag) &,
                (DuplicatePositions@# // tag) &,
                (DuplicatePositionsBy[#, FromDigits[#, 2] &,
                Method -> "UseGatherByLocalMap"] // tag) &,
                (positionDuplicates@# // tag) &
                ;
                benchmark[fns]@binaryVectors50k
                n = 10^3;
                binaryVectorsRagged5k = IntegerDigits[#, 2] & /@ RandomInteger[n, 5*n];
                fns =
                groupBy,
                (DuplicatePositions@# // tag) &,
                (DuplicatePositionsBy[#, FromDigits[#, 2] &,
                Method -> "UseGatherByLocalMap"] // tag) &,
                (positionDuplicates@# // tag) &
                ;
                benchmark[fns]@binaryVectorsRagged5k

                n = 10^4;
                binaryVectorsSparse10k := RandomInteger[1, n, n];
                fns =
                (DuplicatePositions@# // tag) &,
                (positionDuplicates@# // tag) &,
                (DuplicatePositions[#, Method -> "UseOrdering"] // tag) &,
                groupBy;
                benchmark[fns]@binaryVectorsSparse10k





                share|improve this answer












                $endgroup$

















                  3














                  3










                  3







                  $begingroup$

                  While this question repeats a previous query about finding DuplicatePositions, the duplicates here are amongst a list of binary vectors in contrast to the original duplicates occurring amongst a list of numbers. As illustrated in an answer to the original query however, the type, depth and distribution of inputs can significantly impact efficiency so there may well be further optimizations specific to this case of finding duplicates amongst binary vectors. The following summarises timings of the "superfunction" DuplicatePositions (collected and defined from answers to the original question - in particular Szabolcs, Carl Woll and Mr.Wizard), postionDuplicates (the fastest solutions for numbers from Szabolcs) and a tweeking in the "UseGatherByLocalMap" Method option (from Carl Woll), the accepted groupBy answer (by Roman) and the nearest answer (by Henrik Schumacher) for various types of binary vectors. I've contributed the "UseOrdering" Method in DuplicatePositions.



                  duplicatePositionsByOrdering[ls_]:= SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First]


                  which seems to do well for sparse vectors (a more succinct version of similar ideas used by Mr.Wizard and Leonid Shifrin in their anwers). Note that a random 1000x1000 binary matrix is very likely to be sparse to the point of no (row) duplicates occurring so presumably in the OP's situation the authentic data is not randomly generated and instead includes manufactured repeats. To the timings (the tag function just puts in the S1, S2 ... tags as originally requested and the tick indicates identical output):



                  enter image description here



                  Obviously timings aren't everything as short-clear functions can often be preferable (as well as potentially being more efficient for different inputs) but it can also sometimes be illuminating--here for example, indicating that GroupBy seems to recognize order for ragged vectors unlike GatherBy.



                  The code for the above output is below



                  SetAttributes[benchmark, HoldAll];

                  benchmark[functions_, opts : OptionsPattern[]] :=
                  Function[input, benchmark[functions, input, opts], HoldAll];

                  benchmark[functions_, input_, OptionsPattern[]] := Module[ToString[fn] -> RepeatedTiming@fn@x,
                  SeedRandom@0;
                  timesOutputs = Through[(tm /@ functions)@input];
                  times =
                  SortBy[Query[All, All, First]@timesOutputs, Last] // Dataset;
                  If[OptionValue@"CheckOutputs",
                  Labeled[times,
                  Row[ToString@Unevaluated@input, Spacer@80,
                  If[SameQ @@ (Query[All, Last, 2]@timesOutputs),
                  Style["[Checkmark]", Green, 20], Style["x", Red, 20]]],
                  Top], times]
                  ];

                  Options[benchmark] = "CheckOutputs" -> True;

                  Options[DuplicatePositions] = Method -> Automatic;

                  DuplicatePositions[ls_, OptionsPattern[]] :=
                  With[method = OptionValue[Method],
                  Switch[method,
                  "UseGatherBy", GatherBy[Range@Length@ls, ls[[#]] &],
                  "UsePositionIndex", Values@PositionIndex@ls,
                  "UseOrdering", SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First],
                  "UseGatherByLocalMap", Module[func, func /: Map[func, _] := ls;
                  GatherBy[Range@Length@ls, func]],
                  Automatic, Which[
                  ArrayQ[ls, 1, NumericQ],
                  DuplicatePositions[ls, "Method" -> "UseGatherBy" ],
                  ArrayQ[ls, 2, NumericQ], DuplicatePositionsBy[ls, FromDigits],
                  MatchQ[_?IntegerQ .. ..]@ls,
                  DuplicatePositionsBy[ls, FromDigits],
                  True, DuplicatePositions[ls, Method -> "UsePositionIndex" ]
                  ]]];

                  DuplicatePositionsBy[ls_, fn_, opts : OptionsPattern[]] :=
                  DuplicatePositions[fn /@ ls, opts];

                  tag = Map["S" <> ToString@# &, #, -1] &;
                  positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &];
                  groupBy[M_] := With[
                  rownames = Array["S" <> ToString[#] &, Length[M]],
                  Values@GroupBy[Thread[rownames -> M], Last -> First]];
                  nearest[M_] :=
                  DeleteDuplicates[
                  Sort /@ Nearest[M -> Automatic, M, [Infinity], 0]];
                  n = 10^4;
                  binaryVectors50k =
                  IntegerDigits[#, 2, 13] & /@ RandomInteger[n, 5*n];
                  fns =
                  groupBy,
                  (nearest@# // tag) &,
                  (DuplicatePositions@# // tag) &,
                  (DuplicatePositionsBy[#, FromDigits[#, 2] &,
                  Method -> "UseGatherByLocalMap"] // tag) &,
                  (positionDuplicates@# // tag) &
                  ;
                  benchmark[fns]@binaryVectors50k
                  n = 10^3;
                  binaryVectorsRagged5k = IntegerDigits[#, 2] & /@ RandomInteger[n, 5*n];
                  fns =
                  groupBy,
                  (DuplicatePositions@# // tag) &,
                  (DuplicatePositionsBy[#, FromDigits[#, 2] &,
                  Method -> "UseGatherByLocalMap"] // tag) &,
                  (positionDuplicates@# // tag) &
                  ;
                  benchmark[fns]@binaryVectorsRagged5k

                  n = 10^4;
                  binaryVectorsSparse10k := RandomInteger[1, n, n];
                  fns =
                  (DuplicatePositions@# // tag) &,
                  (positionDuplicates@# // tag) &,
                  (DuplicatePositions[#, Method -> "UseOrdering"] // tag) &,
                  groupBy;
                  benchmark[fns]@binaryVectorsSparse10k





                  share|improve this answer












                  $endgroup$



                  While this question repeats a previous query about finding DuplicatePositions, the duplicates here are amongst a list of binary vectors in contrast to the original duplicates occurring amongst a list of numbers. As illustrated in an answer to the original query however, the type, depth and distribution of inputs can significantly impact efficiency so there may well be further optimizations specific to this case of finding duplicates amongst binary vectors. The following summarises timings of the "superfunction" DuplicatePositions (collected and defined from answers to the original question - in particular Szabolcs, Carl Woll and Mr.Wizard), postionDuplicates (the fastest solutions for numbers from Szabolcs) and a tweeking in the "UseGatherByLocalMap" Method option (from Carl Woll), the accepted groupBy answer (by Roman) and the nearest answer (by Henrik Schumacher) for various types of binary vectors. I've contributed the "UseOrdering" Method in DuplicatePositions.



duplicatePositionsByOrdering[ls_] := SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First]
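To illustrate what it returns, here is a small made-up list of binary vectors (smallList is purely for this example and is not one of the benchmark inputs): one group of positions is produced per distinct row.

(* made-up input for illustration only *)
smallList = {{0, 1}, {1, 0}, {0, 1}, {1, 1}, {1, 0}};

duplicatePositionsByOrdering[smallList]
(* {{1, 3}, {2, 5}, {4}}: rows 1 and 3 coincide, as do rows 2 and 5 *)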


This "UseOrdering" method seems to do well for sparse vectors (it is a more succinct version of similar ideas used by Mr.Wizard and Leonid Shifrin in their answers). Note that a random 1000x1000 binary matrix is very likely to be "sparse" to the point of no (row) duplicates occurring at all, so presumably the OP's authentic data is not randomly generated and instead includes manufactured repeats. On to the timings (the tag function just puts in the S1, S2, ... tags as originally requested, and a tick indicates identical output from all functions):



(Image from the original answer: benchmark timing results for each set of test vectors, with a tick indicating identical outputs.)



Obviously timings aren't everything: short, clear functions can often be preferable (as well as potentially being more efficient for different inputs). Still, benchmarking can sometimes be illuminating; here, for example, it indicates that GroupBy seems to recognize order for ragged vectors, unlike GatherBy.
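If you want to check agreement between any two of these functions yourself, ignoring both the order of the groups and the order within each group, something along the following lines works (sameGroupingQ is a helper introduced here purely for illustration; the check built into benchmark below simply compares the raw outputs with SameQ, which is order-sensitive):

(* illustrative helper, not used in the benchmark code below *)
sameGroupingQ[g1_, g2_] := Sort[Sort /@ g1] === Sort[Sort /@ g2]

sameGroupingQ[{{"S1", "S3"}, {"S2", "S5"}, {"S4"}},
 {{"S4"}, {"S5", "S2"}, {"S1", "S3"}}]
(* True *)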



The code used to produce the above output is below:



                  SetAttributes[benchmark, HoldAll];

                  benchmark[functions_, opts : OptionsPattern[]] :=
                  Function[input, benchmark[functions, input, opts], HoldAll];

benchmark[functions_, input_, OptionsPattern[]] := Module[{tm, timesOutputs, times},
(* tm[fn] yields a function of the input that returns "name" -> {timing, output} *)
tm = Function[fn, Function[x, ToString[fn] -> RepeatedTiming@fn@x]];
                  SeedRandom@0;
                  timesOutputs = Through[(tm /@ functions)@input];
                  times =
                  SortBy[Query[All, All, First]@timesOutputs, Last] // Dataset;
                  If[OptionValue@"CheckOutputs",
                  Labeled[times,
Row[{ToString@Unevaluated@input, Spacer@80,
If[SameQ @@ (Query[All, Last, 2]@timesOutputs),
Style["\[Checkmark]", Green, 20], Style["x", Red, 20]]}],
                  Top], times]
                  ];

Options[benchmark] = {"CheckOutputs" -> True};

Options[DuplicatePositions] = {Method -> Automatic};

DuplicatePositions[ls_, OptionsPattern[]] :=
With[{method = OptionValue[Method]},
Switch[method,
"UseGatherBy", GatherBy[Range@Length@ls, ls[[#]] &],
"UsePositionIndex", Values@PositionIndex@ls,
"UseOrdering", SplitBy[Ordering@ls, ls[[#]] &] // SortBy[First],
"UseGatherByLocalMap", Module[{func}, func /: Map[func, _] := ls;
GatherBy[Range@Length@ls, func]],
Automatic, Which[
ArrayQ[ls, 1, NumericQ],
DuplicatePositions[ls, Method -> "UseGatherBy"],
ArrayQ[ls, 2, NumericQ], DuplicatePositionsBy[ls, FromDigits],
MatchQ[{{_?IntegerQ ..} ..}]@ls,
DuplicatePositionsBy[ls, FromDigits],
True, DuplicatePositions[ls, Method -> "UsePositionIndex"]
]]];
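(* Quick illustration (not part of the original benchmark): with the default
   Method -> Automatic, a flat numeric list is dispatched to "UseGatherBy" *)
DuplicatePositions[{3, 1, 3, 2, 1}]
(* {{1, 3}, {2, 5}, {4}} *)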

                  DuplicatePositionsBy[ls_, fn_, opts : OptionsPattern[]] :=
                  DuplicatePositions[fn /@ ls, opts];

(* tag converts the position indices i into the "Si" row labels *)
tag = Map["S" <> ToString@# &, #, {-1}] &;
positionDuplicates[ls_] := GatherBy[Range@Length@ls, ls[[#]] &];
groupBy[M_] := With[
{rownames = Array["S" <> ToString[#] &, Length[M]]},
Values@GroupBy[Thread[rownames -> M], Last -> First]];
(* Nearest with {Infinity, 0} gives, for every row, the positions of all rows at
   distance 0, i.e. the exact duplicates *)
nearest[M_] :=
DeleteDuplicates[
Sort /@ Nearest[M -> Automatic, M, {Infinity, 0}]];
                  n = 10^4;
                  binaryVectors50k =
                  IntegerDigits[#, 2, 13] & /@ RandomInteger[n, 5*n];
fns = {
groupBy,
(nearest@# // tag) &,
(DuplicatePositions@# // tag) &,
(DuplicatePositionsBy[#, FromDigits[#, 2] &,
Method -> "UseGatherByLocalMap"] // tag) &,
(positionDuplicates@# // tag) &
};
                  benchmark[fns]@binaryVectors50k
                  n = 10^3;
                  binaryVectorsRagged5k = IntegerDigits[#, 2] & /@ RandomInteger[n, 5*n];
fns = {
groupBy,
(DuplicatePositions@# // tag) &,
(DuplicatePositionsBy[#, FromDigits[#, 2] &,
Method -> "UseGatherByLocalMap"] // tag) &,
(positionDuplicates@# // tag) &
};
                  benchmark[fns]@binaryVectorsRagged5k

                  n = 10^4;
binaryVectorsSparse10k := RandomInteger[1, {n, n}];
fns = {
(DuplicatePositions@# // tag) &,
(positionDuplicates@# // tag) &,
(DuplicatePositions[#, Method -> "UseOrdering"] // tag) &,
groupBy};
                  benchmark[fns]@binaryVectorsSparse10k
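Finally, a small sanity check. The matrix below is made up for illustration (it is neither the question's matrix nor one of the benchmark inputs); rows 1 and 3 are identical, as are rows 2 and 5, and both the tagged superfunction and the accepted groupBy approach return the same labelled groups.

m = {{0, 1, 0}, {1, 1, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 0}};

tag@DuplicatePositions[m]
(* {{"S1", "S3"}, {"S2", "S5"}, {"S4"}} *)

groupBy[m]
(* {{"S1", "S3"}, {"S2", "S5"}, {"S4"}} *)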






                  edited May 8 at 23:08

























                  answered Apr 26 at 13:40









Ronald Monson




















